The Sound of APALM Clapping: Faster Nonsmooth Nonconvex Optimization with Stochastic Asynchronous PALM
Damek Davis and Madeleine Udell
Cornell University
{dsd95,mru8}@cornell.edu
Brent Edmunds
University of California, Los Angeles
brent.edmunds@math.ucla.edu
Abstract
We introduce the Stochastic Asynchronous Proximal Alternating Linearized Minimization (SAPALM) method, a block coordinate stochastic proximal-gradient
method for solving nonconvex, nonsmooth optimization problems. SAPALM is the
first asynchronous parallel optimization method that provably converges on a large
class of nonconvex, nonsmooth problems. We prove that SAPALM matches the
best known rates of convergence, among synchronous or asynchronous methods,
on this problem class. We provide upper bounds on the number of workers
for which we can expect to see a linear speedup, which match the best bounds
known for less complex problems, and show that in practice SAPALM achieves
this linear speedup. We demonstrate state-of-the-art performance on several matrix
factorization problems.
1 Introduction
Parallel optimization algorithms often feature synchronization steps: all processors wait for the last to
finish before moving on to the next major iteration. Unfortunately, the distribution of finish times is
heavy tailed. Hence as the number of processors increases, most processors waste most of their time
waiting. A natural solution is to remove any synchronization steps: instead, allow each idle processor
to update the global state of the algorithm and continue, ignoring read and write conflicts whenever
they occur. Occasionally one processor will erase the work of another; the hope is that the gain from
allowing processors to work at their own paces offsets the loss from a sloppy division of labor.
These asynchronous parallel optimization methods can work quite well in practice, but it is difficult
to tune their parameters: lock-free code is notoriously hard to debug. For these problems, there
is nothing as practical as a good theory, which might explain how to set these parameters so as to
guarantee convergence.
In this paper, we propose a theoretical framework guaranteeing convergence of a class of asynchronous
algorithms for problems of the form
$$\operatorname*{minimize}_{(x_1,\dots,x_m)\in H_1\times\cdots\times H_m}\; f(x_1,\dots,x_m) + \sum_{j=1}^m r_j(x_j), \tag{1}$$
where $f$ is a continuously differentiable ($C^1$) function with an $L$-Lipschitz gradient, each $r_j$ is a lower semicontinuous (not necessarily convex or differentiable) function, and the sets $H_j$ are Euclidean spaces (i.e., $H_j = \mathbb{R}^{n_j}$ for some $n_j \in \mathbb{N}$). This problem class includes many (convex and nonconvex)
signal recovery problems, matrix factorization problems, and, more generally, any generalized low
rank model [20]. Following terminology from these domains, we view f as a loss function and each
rj as a regularizer. For example, f might encode the misfit between the observations and the model,
while the regularizers rj encode structural constraints on the model such as sparsity or nonnegativity.
Many synchronous parallel algorithms have been proposed to solve (1), including stochastic proximal-gradient and block coordinate descent methods [22, 3]. Our asynchronous variants build on these
synchronous methods, and in particular on proximal alternating linearized minimization (PALM) [3].
These asynchronous variants depend on the same parameters as the synchronous methods, such as
a step size parameter, but also new ones, such as the maximum allowable delay. Our contribution
here is to provide a convergence theory to guide the choice of those parameters within our control
(such as the stepsize) in light of those out of our control (such as the maximum delay) to ensure
convergence at the rate guaranteed by theory. We call this algorithm the Stochastic Asynchronous
Proximal Alternating Linearized Minimization method, or SAPALM for short.
Lock-free optimization is not a new idea. Many of the first theoretical results for such algorithms
appear in the textbook [2], written over a generation ago. But within the last few years, asynchronous
stochastic gradient and block coordinate methods have become newly popular, and enthusiasm in
practice has been matched by progress in theory. Guaranteed convergence for these algorithms has
been established for convex problems; see, for example, [13, 15, 16, 12, 11, 4, 1].
Asynchrony has also been used to speed up algorithms for nonconvex optimization, in particular,
for learning deep neural networks [6] and completing low-rank matrices [23]. In contrast to the
convex case, the existing asynchronous convergence theory for nonconvex problems is limited to the
following four scenarios: stochastic gradient methods for smooth unconstrained problems [19, 10];
block coordinate methods for smooth problems with separable, convex constraints [18]; block
coordinate methods for the general problem (1) [5]; and deterministic distributed proximal-gradient
methods for smooth nonconvex loss functions with a single nonsmooth, convex regularizer [9]. A
general block-coordinate stochastic gradient method with nonsmooth, nonconvex regularizers is still
missing from the theory. We aim to fill this gap.
Contributions. We introduce SAPALM, the first asynchronous parallel optimization method that
provably converges for all nonconvex, nonsmooth problems of the form (1). SAPALM is a block
coordinate stochastic proximal-gradient method that generalizes the deterministic PALM method
of [5, 3]. When applied to problem (1), we prove that SAPALM matches the best known rates of
convergence, due to [8] in the case where each $r_j$ is convex and $m = 1$: that is, asynchrony carries
no theoretical penalty for convergence speed. We test SAPALM on a few example problems and
compare to a synchronous implementation, showing a linear speedup.
Notation. Let $m \in \mathbb{N}$ denote the number of coordinate blocks. We let $H = H_1 \times \cdots \times H_m$. For every $x \in H$, each partial gradient $\nabla_j f(x_1,\dots,x_{j-1},\cdot,x_{j+1},\dots,x_m) : H_j \to H_j$ is $L_j$-Lipschitz continuous; we let $\underline{L} = \min_j \{L_j\} \le \max_j\{L_j\} = \overline{L}$. The number $\tau \in \mathbb{N}$ is the maximum allowable delay. Define the aggregate regularizer $r : H \to (-\infty, \infty]$ as $r(x) = \sum_{j=1}^m r_j(x_j)$. For each $j \in \{1, \dots, m\}$, $y \in H_j$, and $\gamma > 0$, define the proximal operator
$$\mathrm{prox}_{\gamma r_j}(y) := \operatorname*{argmin}_{x_j \in H_j}\; r_j(x_j) + \frac{1}{2\gamma}\|x_j - y\|^2.$$
For convex $r_j$, $\mathrm{prox}_{\gamma r_j}(y)$ is uniquely defined, but for nonconvex problems, it is, in general, a set. We make the mild assumption that for all $y \in H_j$, we have $\mathrm{prox}_{\gamma r_j}(y) \neq \emptyset$. A slight technicality arises from our ability to choose among multiple elements of $\mathrm{prox}_{\gamma r_j}(y)$, especially in light of the stochastic nature of SAPALM. Thus, for all $y$, $j$, and $\gamma > 0$, we fix an element
$$\zeta_j(y, \gamma) \in \mathrm{prox}_{\gamma r_j}(y). \tag{2}$$
By [17, Exercise 14.38], we can assume that $\zeta_j$ is measurable, which enables us to reason with expectations wherever they involve $\zeta_j$. As shorthand, we use $\mathrm{prox}_{\gamma r_j}(y)$ to denote the (unique) choice $\zeta_j(y, \gamma)$. For any random variable or vector $X$, we let $\mathbb{E}_k[X] = \mathbb{E}[X \mid x^k, \dots, x^0, \nu^k, \dots, \nu^0]$ denote the conditional expectation of $X$ with respect to the sigma algebra generated by the history of SAPALM.
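To make the fixed selection (2) concrete, the following minimal sketch implements one deterministic element of the proximal operator for the (nonconvex) $\ell_0$ penalty $r_j(x) = \lambda\|x\|_0$, whose prox is the hard-thresholding map and is set-valued exactly at the threshold; the penalty choice and tie-breaking rule here are illustrative, not taken from the paper.

```python
import numpy as np

def prox_l0(y, gamma, lam):
    """One deterministic element of prox_{gamma * lam * ||.||_0}(y).

    Entrywise: keeping y_i costs gamma*lam, zeroing it costs y_i**2 / 2,
    so the prox is set-valued where y_i**2 == 2*gamma*lam; we break that
    tie toward 0 so the same input always yields the same output
    (a measurable selection, as required by (2)).
    """
    keep = y ** 2 > 2.0 * gamma * lam   # strict inequality: tie goes to 0
    return np.where(keep, y, 0.0)

# prox_l0(np.array([0.1, -2.0, 1.0]), gamma=0.5, lam=1.0) -> [0., -2., 0.]
```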
2 Algorithm Description
Algorithm 1 displays the SAPALM method.
We highlight a few features of the algorithm which we discuss in more detail below.
Algorithm 1 SAPALM [Local view]
Input: $x \in H$
1: All processors in parallel do
2: loop
3:    Randomly select a coordinate block $j \in \{1, \dots, m\}$
4:    Read $x$ from shared memory
5:    Compute $g = \nabla_j f(x) + \nu_j$
6:    Choose stepsize $\gamma_j \in \mathbb{R}_{++}$    ▷ According to Assumption 3
7:    $x_j \leftarrow \mathrm{prox}_{\gamma_j r_j}(x_j - \gamma_j g)$    ▷ According to (2)
• Inconsistent iterates. Other processors may write updates to $x$ in the time required to read $x$ from memory.
• Coordinate blocks. When the coordinate blocks $x_j$ are low dimensional, it reduces the likelihood that one update will be immediately erased by another, simultaneous update.
• Noise. The noise $\nu \in H$ is a random variable that we use to model injected noise. It can be set to 0, or chosen to accelerate each iteration, or to avoid saddle points.

Algorithm 1 has an equivalent (mathematical) description which we present in Algorithm 2, using an iteration counter $k$ which is incremented each time a processor completes an update. This iteration counter is not required by the processors themselves to compute the updates.

In Algorithm 1, a processor might not have access to the shared-memory's global state, $x^k$, at iteration $k$. Rather, because all processors can continuously update the global state while other processors are reading, local processors might only read the inconsistently delayed iterate $x^{k-d_k} = (x_1^{k-d_{k,1}}, \dots, x_m^{k-d_{k,m}})$, where the delays $d_k$ are integers less than $\tau$, and $x^l = x^0$ when $l < 0$.
Algorithm 2 SAPALM [Global view]
Input: $x^0 \in H$
1: for $k \in \mathbb{N}$ do
2:    Randomly select a coordinate block $j_k \in \{1, \dots, m\}$
3:    Read $x^{k-d_k} = (x_1^{k-d_{k,1}}, \dots, x_m^{k-d_{k,m}})$ from shared memory
4:    Compute $g^k = \nabla_{j_k} f(x^{k-d_k}) + \nu_{j_k}^k$
5:    Choose stepsize $\gamma_{j_k}^k \in \mathbb{R}_{++}$    ▷ According to Assumption 3
6:    for $j = 1, \dots, m$ do
7:       if $j = j_k$ then
8:          $x_{j_k}^{k+1} \leftarrow \mathrm{prox}_{\gamma_{j_k}^k r_{j_k}}(x_{j_k}^k - \gamma_{j_k}^k g^k)$    ▷ According to (2)
9:       else
10:         $x_j^{k+1} \leftarrow x_j^k$
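To make the global view concrete, here is a minimal serial simulation of Algorithm 2 (a sketch only: true SAPALM runs the loop concurrently across processors, while here the delays are drawn at random up to $\tau$; `grad_j`, `prox_j`, `gammas`, and `noise` are placeholder callables for the problem at hand).

```python
import numpy as np

def sapalm_global(x0, grad_j, prox_j, gammas, m, tau, T, noise=None, rng=None):
    """Serial simulation of SAPALM [Global view].

    x0:      list of m coordinate blocks (numpy arrays)
    grad_j:  grad_j(x, j) -> partial gradient of f w.r.t. block j at x
    prox_j:  prox_j(v, j, gamma) -> a fixed element of prox_{gamma r_j}(v)
    gammas:  gammas(k, j) -> stepsize, e.g. chosen per Assumption 3
    """
    rng = rng or np.random.default_rng(0)
    x = [b.copy() for b in x0]
    hist = [[b.copy() for b in x]]                 # iterate history, for delays
    for k in range(T):
        j = rng.integers(m)                        # uniformly random block
        d = rng.integers(min(k, tau) + 1, size=m)  # coordinate delays <= tau
        x_read = [hist[k - d[i]][i] for i in range(m)]   # inconsistent read
        g = grad_j(x_read, j)
        if noise is not None:
            g = g + noise(k, j)                    # injected noise nu_{j_k}^k
        gam = gammas(k, j)
        x[j] = prox_j(x[j] - gam * g, j, gam)
        hist.append([b.copy() for b in x])
    return x
```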
2.1 Assumptions on the Delay, Independence, Variance, and Stepsizes
Assumption 1 (Bounded Delay). There exists some $\tau \in \mathbb{N}$ such that, for all $k \in \mathbb{N}$, the sequence of coordinate delays lies within $d_k \in \{0, \dots, \tau\}^m$.

Assumption 2 (Independence). The indices $\{j_k\}_{k\in\mathbb{N}}$ are uniformly distributed and collectively IID. They are independent from the history of the algorithm $x^k, \dots, x^0, \nu^k, \dots, \nu^0$ for all $k \in \mathbb{N}$.

We employ two possible restrictions on the noise sequence $\nu^k$ and the sequence of allowable stepsizes $\gamma_j^k$, all of which lead to different convergence rates:

Assumption 3 (Noise Regimes and Stepsizes). Let $\sigma_k^2 := \mathbb{E}_k\|\nu^k\|^2$ denote the expected squared norm of the noise, and let $a \in (1, \infty)$. Assume that $\mathbb{E}_k \nu^k = 0$ and that there is a sequence of weights $\{c_k\}_{k\in\mathbb{N}} \subseteq [1, \infty)$ such that
$$(\forall k \in \mathbb{N}),\ (\forall j \in \{1,\dots,m\}) \qquad \gamma_j^k := \frac{1}{a c_k \left(L_j + 2\overline{L}\tau m^{-1/2}\right)},$$
which we choose using the following two rules, both of which depend on the growth of $\sigma_k$:
Summable. $\sum_{k=0}^{\infty} \sigma_k^2 < \infty \implies c_k \equiv 1$;
$\alpha$-Diminishing. $(\exists \alpha \in (0,1))$ $\sigma_k^2 = O((k+1)^{-\alpha}) \implies c_k = \Theta((k+1)^{(1-\alpha)})$.

More noise, measured by $\sigma_k$, results in worse convergence rates and stricter requirements regarding which stepsizes can be chosen. We provide two stepsize choices which, depending on the noise regime, interpolate between $\Theta(1)$ and $\Theta(k^{1-\alpha})$ for any $\alpha \in (0,1)$. Larger stepsizes lead to convergence rates of order $O(k^{-1})$, while smaller ones lead to order $O(k^{-\alpha})$.
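A small sketch of this stepsize schedule under the two noise regimes (the constants $a$, $L_j$, $\overline{L}$, $\tau$ are inputs; for the $\alpha$-diminishing regime we take $c_k = (k+1)^{1-\alpha}$, one valid choice within the stated $\Theta$-class):

```python
def stepsize(k, j, L, Lbar, tau, m, a=2.0, alpha=None):
    """gamma_j^k = 1 / (a * c_k * (L[j] + 2 * Lbar * tau / sqrt(m))).

    alpha=None     -> summable-noise regime, c_k = 1
    alpha in (0,1) -> alpha-diminishing regime, c_k = (k + 1)**(1 - alpha)
    """
    c_k = 1.0 if alpha is None else (k + 1) ** (1.0 - alpha)
    return 1.0 / (a * c_k * (L[j] + 2.0 * Lbar * tau * m ** -0.5))
```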
2.2 Algorithm Features
Inconsistent Asynchronous Reading. SAPALM allows asynchronous access patterns. A processor may, at any time, and without notifying other processors:
1. Read. While other processors are writing to shared memory, read the possibly out-of-sync, delayed coordinates $x_1^{k-d_{k,1}}, \dots, x_m^{k-d_{k,m}}$.
2. Compute. Locally, compute the partial gradient $\nabla_{j_k} f(x_1^{k-d_{k,1}}, \dots, x_m^{k-d_{k,m}})$.
3. Write. After computing the gradient, replace the $j_k$-th coordinate with
$$x_{j_k}^{k+1} \leftarrow \operatorname*{argmin}_{y}\; r_{j_k}(y) + \langle \nabla_{j_k} f(x^{k-d_k}) + \nu_{j_k}^k,\, y - x_{j_k}^k\rangle + \frac{1}{2\gamma_{j_k}^k}\|y - x_{j_k}^k\|^2.$$
Uncoordinated access eliminates waiting time for processors, which speeds up computation. The processors are blissfully ignorant of any conflict between their actions, and the paradoxes these conflicts entail: for example, the states $x_1^{k-d_{k,1}}, \dots, x_m^{k-d_{k,m}}$ need never have simultaneously existed in memory. Although we write the method with a global counter $k$, the asynchronous processors need not be aware of it; and the requirement that the delays $d_k$ remain bounded by $\tau$ does not demand coordination, but rather serves only to define $\tau$.
What Does the Noise Model Capture? SAPALM is the first asynchronous PALM algorithm to allow and analyze noisy updates. The stochastic noise, $\nu^k$, captures three phenomena:
1. Computational Error. Noise due to random computational error.
2. Avoiding Saddles. Noise deliberately injected for the purpose of avoiding saddles, as in [7].
3. Stochastic Gradients. Noise due to stochastic approximations of delayed gradients.
Of course, the noise model also captures any combination of the above phenomena. The last one is,
perhaps, the most interesting: it allows us to prove convergence for a stochastic- or minibatch-gradient
version of APALM, rather than requiring processors to compute a full (delayed) gradient. Stochastic
gradients can be computed faster than their batch counterparts, allowing more frequent updates.
2.3 SAPALM as an Asynchronous Block Mini-Batch Stochastic Proximal-Gradient Method
In Algorithm 1, any stochastic estimator $\nabla f(x^{k-d_k}; \xi)$ of the gradient may be used, as long as $\mathbb{E}_k\left[\nabla f(x^{k-d_k}; \xi)\right] = \nabla f(x^{k-d_k})$ and $\mathbb{E}_k\|\nabla f(x^{k-d_k}; \xi) - \nabla f(x^{k-d_k})\|^2 \le \sigma^2$. In particular, if Problem (1) takes the form
$$\operatorname*{minimize}_{x \in H}\; \mathbb{E}_\xi\left[f(x_1, \dots, x_m; \xi)\right] + \frac{1}{m}\sum_{j=1}^m r_j(x_j),$$
then, in Algorithm 2, the stochastic mini-batch estimator $g^k = m_k^{-1}\sum_{i=1}^{m_k} \nabla f(x^{k-d_k}; \xi_i^k)$, where the $\xi_i^k$ are IID, may be used in place of $\nabla f(x^{k-d_k}) + \nu^k$. A quick calculation shows that $\mathbb{E}_k\|g^k - \nabla f(x^{k-d_k})\|^2 = O(m_k^{-1})$. Thus, any increasing batch size $m_k = \Omega((k+1)^{\alpha})$, with $\alpha \in (0,1)$, conforms to Assumption 3.
When nonsmooth regularizers are present, all known convergence rate results for nonconvex stochastic
gradient algorithms require the use of increasing, rather than fixed, minibatch sizes; see [8, 22] for
analogous, synchronous algorithms.
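A sketch of the increasing-minibatch estimator (the sampler `stoch_grad` is a stand-in for $\nabla f(\cdot\,; \xi)$, and the growth exponent is illustrative):

```python
import numpy as np

def minibatch_grad(x_delayed, stoch_grad, k, alpha=0.5, rng=None):
    """g^k = (1/m_k) * sum_{i=1}^{m_k} stoch_grad(x^{k-d_k}), with m_k ~ (k+1)^alpha.

    If each sample has variance sigma^2, then g^k has variance
    O(sigma^2 / m_k) = O((k+1)^{-alpha}), conforming to Assumption 3.
    """
    rng = rng or np.random.default_rng()
    m_k = int(np.ceil((k + 1) ** alpha))
    return sum(stoch_grad(x_delayed, rng) for _ in range(m_k)) / m_k
```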
3 Convergence Theorem
Measuring Convergence for Nonconvex Problems. For nonconvex problems, it is standard to measure convergence (to a stationary point) by the expected violation of stationarity, which for us is the (deterministic) quantity
$$S_k := \mathbb{E}\left[\sum_{j=1}^m \left\|\frac{1}{\gamma_j^k}(w_j^k - x_j^k) + \nu_j^k\right\|^2\right];$$
where
$$(\forall j \in \{1,\dots,m\}) \qquad w_j^k = \mathrm{prox}_{\gamma_j^k r_j}\!\left(x_j^k - \gamma_j^k(\nabla_j f(x^{k-d_k}) + \nu_j^k)\right). \tag{3}$$
A reduction to the case $r \equiv 0$ and $d_k \equiv 0$ reveals that $w_j^k - x_j^k + \gamma_j^k\nu_j^k = -\gamma_j^k\nabla_j f(x^k)$ and, hence, $S_k = \mathbb{E}\|\nabla f(x^k)\|^2$. More generally, $w_j^k - x_j^k + \gamma_j^k\nu_j^k \in -\gamma_j^k\left(\partial_L r_j(w_j^k) + \nabla_j f(x^{k-d_k})\right)$, where $\partial_L r_j$ is the limiting subdifferential of $r_j$ [17] which, if $r_j$ is convex, reduces to the standard convex subdifferential familiar from [14]. A messy but straightforward calculation shows that our convergence rates for $S_k$ can be converted to convergence rates for elements of $\partial_L r(w^k) + \nabla f(w^k)$.
We present our main convergence theorem now and defer the proof to Section 4.
Theorem 1 (SAPALM Convergence Rates). Let $\{x^k\}_{k\in\mathbb{N}} \subseteq H$ be the SAPALM sequence created by Algorithm 2. Then, under Assumption 3, the following convergence rates hold: for all $T \in \mathbb{N}$, if $\{\nu^k\}_{k\in\mathbb{N}}$ is
1. Summable, then
$$\min_{k=0,\dots,T} S_k \le \mathbb{E}_{k\sim P_T}[S_k] = O\!\left(\frac{m(\overline{L} + 2\overline{L}\tau m^{-1/2})}{T+1}\right);$$
2. $\alpha$-Diminishing, then
$$\min_{k=0,\dots,T} S_k \le \mathbb{E}_{k\sim P_T}[S_k] = O\!\left(\frac{m(\overline{L} + 2\overline{L}\tau m^{-1/2}) + m\log(T+1)}{(T+1)^{\alpha}}\right);$$
where, for all $T \in \mathbb{N}$, $P_T$ is the distribution on $\{0,\dots,T\}$ such that $P_T(X = k) \propto c_k^{-1}$.
Effects of Delay and Linear Speedups. The $m^{-1/2}$ term in the convergence rates presented in Theorem 1 prevents the delay $\tau$ from dominating our rates of convergence. In particular, as long as $\tau = O(\sqrt{m})$, the convergence rates in the synchronous ($\tau = 0$) and asynchronous cases are within a small constant factor of each other. In that case, because the work per iteration in the synchronous and asynchronous versions of SAPALM is the same, we expect a linear speedup: SAPALM with $p$ processors will converge nearly $p$ times faster than PALM, since the iteration counter will be updated $p$ times as often. As a rule of thumb, $\tau$ is roughly proportional to the number of processors. Hence we can achieve a linear speedup on as many as $O(\sqrt{m})$ processors.
3.1 The Asynchronous Stochastic Block Gradient Method
If the regularizer $r$ is identically zero, then the noise $\nu^k$ need not vanish in the limit. The following theorem guarantees convergence of asynchronous stochastic block gradient descent with a constant minibatch size. See the supplemental material for a proof.

Theorem 2 (SAPALM Convergence Rates ($r \equiv 0$)). Let $\{x^k\}_{k\in\mathbb{N}} \subseteq H$ be the SAPALM sequence created by Algorithm 2 in the case that $r \equiv 0$. If $\{\mathbb{E}_k\|\nu^k\|^2\}_{k\in\mathbb{N}}$ is bounded (not necessarily diminishing) and
$$(\exists a \in (1,\infty)),\ (\forall k \in \mathbb{N}),\ (\forall j \in \{1,\dots,m\}) \qquad \gamma_j^k := \frac{1}{a\sqrt{k}\left(L_j + 2M\tau m^{-1/2}\right)},$$
then for all $T \in \mathbb{N}$, we have
$$\min_{k=0,\dots,T} S_k \le \mathbb{E}_{k\sim P_T}[S_k] = O\!\left(\frac{m(\overline{L} + 2\overline{L}\tau m^{-1/2}) + m\log(T+1)}{\sqrt{T+1}}\right),$$
where $P_T$ is the distribution on $\{0,\dots,T\}$ such that $P_T(X = k) \propto k^{-1/2}$.
4 Convergence Analysis

4.1 The Asynchronous Lyapunov Function
Key to the convergence of SAPALM is the following Lyapunov function, defined on $H^{1+\tau}$, which aggregates not only the current state of the algorithm, as is common in synchronous algorithms, but also the history of the algorithm over the delayed time steps: $(\forall x(0), x(1), \dots, x(\tau) \in H)$
$$\Phi(x(0), x(1), \dots, x(\tau)) = f(x(0)) + r(x(0)) + \frac{\overline{L}}{2\sqrt{m}}\sum_{h=1}^{\tau}(\tau - h + 1)\|x(h) - x(h-1)\|^2.$$
This Lyapunov function appears in our convergence analysis through the following inequality, which is proved in the supplemental material.

Lemma 1 (Lyapunov Function Supermartingale Inequality). For all $k \in \mathbb{N}$, let $z^k = (x^k, \dots, x^{k-\tau}) \in H^{1+\tau}$. Then for all $\varepsilon > 0$, we have
$$\mathbb{E}_k\,\Phi(z^{k+1}) - \Phi(z^k) \le -\frac{1}{2m}\sum_{j=1}^m\left(\frac{1}{\gamma_j^k} - (1+\varepsilon)\left(L_j + \frac{2\overline{L}\tau}{m^{1/2}}\right)\right)\mathbb{E}_k\|w_j^k - x_j^k + \gamma_j^k\nu_j^k\|^2 + \sum_{j=1}^m\frac{\gamma_j^k\left(1 + \gamma_j^k(1+\varepsilon^{-1})\left(L_j + 2\overline{L}\tau m^{-1/2}\right)\right)\mathbb{E}_k\|\nu_j^k\|^2}{2m},$$
where for all $j \in \{1,\dots,m\}$, we have $w_j^k = \mathrm{prox}_{\gamma_j^k r_j}(x_j^k - \gamma_j^k(\nabla_j f(x^{k-d_k}) + \nu_j^k))$. In particular, for $\sigma_k = 0$, we can take $\varepsilon = 0$ and assume the last line is zero.

Notice that if $\sigma_k = \varepsilon = 0$ and $\gamma_j^k$ is chosen as suggested in Algorithm 2, the (conditional) expected value of the Lyapunov function is strictly decreasing. If $\sigma_k$ is nonzero, the factor $\varepsilon$ will be used in concert with the stepsize $\gamma_j^k$ to ensure that noise does not cause the algorithm to diverge.
4.2 Proof of Theorem 1
For either noise regime, we define, for all $k \in \mathbb{N}$ and $j \in \{1,\dots,m\}$, the factor $\varepsilon := 2^{-1}(a-1)$. With the assumed choice of $\gamma_j^k$ and $\varepsilon$, Lemma 1 implies that the expected Lyapunov function decreases, up to a summable residual: with $A_j^k := w_j^k - x_j^k + \gamma_j^k\nu_j^k$, we have
$$\mathbb{E}\,\Phi(z^{k+1}) - \mathbb{E}\,\Phi(z^k) \le -\mathbb{E}\left[\frac{1}{2m}\sum_{j=1}^m\frac{1}{\gamma_j^k}\left(1 - \frac{1+\varepsilon}{a c_k}\right)\|A_j^k\|^2\right] + \sum_{j=1}^m\frac{\gamma_j^k\left(1 + \gamma_j^k(1+\varepsilon^{-1})\left(L_j + 2\overline{L}\tau m^{-1/2}\right)\right)\mathbb{E}\,\mathbb{E}_k\|\nu_j^k\|^2}{2m}. \tag{4}$$
Two upper bounds follow from the definition of $\gamma_j^k$, the lower bound $c_k \ge 1$, and the straightforward inequalities $(a c_k)^{-1}(\overline{L} + 2\overline{L}\tau m^{-1/2})^{-1} \le \gamma_j^k \le (a c_k)^{-1}(\underline{L} + 2\underline{L}\tau m^{-1/2})^{-1}$:
$$\frac{\left(1 - (1+\varepsilon)a^{-1}\right)}{2ma(\overline{L} + 2\overline{L}\tau m^{-1/2})}\cdot\frac{1}{c_k}\,S_k \le \mathbb{E}\left[\frac{1}{2m}\sum_{j=1}^m\frac{1}{\gamma_j^k}\left(1 - \frac{1+\varepsilon}{a c_k}\right)\|A_j^k\|^2\right]$$
and
$$\sum_{j=1}^m\frac{\gamma_j^k\left(1 + \gamma_j^k(1+\varepsilon^{-1})\left(L_j + 2\overline{L}\tau m^{-1/2}\right)\right)\mathbb{E}_k\|\nu_j^k\|^2}{2m} \le \frac{\left(1 + (a c_k)^{-1}(1+\varepsilon^{-1})\right)(\sigma_k^2/c_k)}{2a(\underline{L} + 2\underline{L}\tau m^{-1/2})}.$$
Now rearrange (4), use $\mathbb{E}\,\Phi(z^{k+1}) \ge \inf_{x\in H}\{f(x) + r(x)\}$ and $\mathbb{E}\,\Phi(z^0) = f(x^0) + r(x^0)$, and sum (4) over $k$ to get
$$\frac{\left(1 - (1+\varepsilon)a^{-1}\right)}{2ma(\overline{L} + 2\overline{L}\tau m^{-1/2})}\cdot\frac{\sum_{k=0}^T c_k^{-1} S_k}{\sum_{k=0}^T c_k^{-1}} \le \frac{f(x^0) + r(x^0) - \inf_{x\in H}\{f(x) + r(x)\} + \sum_{k=0}^T\frac{\left(1 + (a c_k)^{-1}(1+\varepsilon^{-1})\right)(\sigma_k^2/c_k)}{2a(\underline{L} + 2\underline{L}\tau m^{-1/2})}}{\sum_{k=0}^T c_k^{-1}}.$$
The weighted average on the left-hand side of this inequality is bounded from below by $\min_{k=0,\dots,T} S_k$ and is precisely the term $\mathbb{E}_{k\sim P_T}[S_k]$. What remains to be shown is an upper bound on the right-hand side, which we will now call $R_T$.

If the noise is summable, then $c_k \equiv 1$, so $\sum_{k=0}^T c_k^{-1} = (T+1)$ and $\sum_{k=0}^T \sigma_k^2/c_k < \infty$, which implies that $R_T = O(m(\overline{L} + 2\overline{L}\tau m^{-1/2})(T+1)^{-1})$. If the noise is $\alpha$-diminishing, then $c_k = \Theta(k^{(1-\alpha)})$, so $\sum_{k=0}^T c_k^{-1} = \Omega((T+1)^{\alpha})$ and, because $\sigma_k^2/c_k = O(k^{-1})$, there exists a $B > 0$ such that $\sum_{k=0}^T \sigma_k^2/c_k \le \sum_{k=0}^T B k^{-1} = O(\log(T+1))$, which implies that $R_T = O((m(\overline{L} + 2\overline{L}\tau m^{-1/2}) + m\log(T+1))(T+1)^{-\alpha})$.
5 Numerical Experiments
In this section, we present numerical results to confirm that SAPALM delivers the expected performance gains over PALM. We confirm two properties: 1) SAPALM converges to values nearly as
low as PALM given the same number of iterations, 2) SAPALM exhibits a near-linear speedup as
the number of workers increases. All experiments use an Intel Xeon machine with 2 sockets and 10
cores per socket.
We use two different nonconvex matrix factorization problems to exhibit these properties, to which
we apply two different SAPALM variants: one without noise, and one with stochastic gradient noise.
For each of our examples, we generate a matrix A ? Rn?n with iid standard normal entries, where
n = 2000. Although SAPALM is intended for use on much larger problems, using a small problem
size makes write conflicts more likely, and so serves as an ideal setting to understand how asynchrony
affects convergence.
1. Sparse PCA with Asynchronous Block Coordinate Updates. We minimize
$$\operatorname*{argmin}_{X,Y}\; \frac{1}{2}\|A - X^T Y\|_F^2 + \lambda\|X\|_1 + \lambda\|Y\|_1, \tag{5}$$
where $X \in \mathbb{R}^{d\times n}$ and $Y \in \mathbb{R}^{d\times n}$ for some $d \in \mathbb{N}$. We solve this problem using SAPALM with no noise, $\nu^k = 0$ (a serial sketch of this block update follows the list).
2. Quadratically Regularized Firm Thresholding PCA with Asynchronous Stochastic Gradients. We minimize
$$\operatorname*{argmin}_{X,Y}\; \frac{1}{2}\|A - X^T Y\|_F^2 + \lambda(\|X\|_{\mathrm{Firm}} + \|Y\|_{\mathrm{Firm}}) + \frac{\mu}{2}(\|X\|_F^2 + \|Y\|_F^2), \tag{6}$$
where $X \in \mathbb{R}^{d\times n}$, $Y \in \mathbb{R}^{d\times n}$, and $\|\cdot\|_{\mathrm{Firm}}$ is the firm thresholding penalty proposed in [21]: a nonconvex, nonsmooth function whose proximal operator truncates small values to zero and preserves large values. We solve this problem using the stochastic gradient SAPALM variant from Section 2.3.
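As an illustration, here is a minimal serial sketch of one SAPALM block update for (5), using soft-thresholding as the $\ell_1$ prox (the stepsize and $\lambda$ are placeholders; the parallel implementation updates blocks concurrently):

```python
import numpy as np

def soft_threshold(V, t):
    # prox of t * ||.||_1, applied entrywise
    return np.sign(V) * np.maximum(np.abs(V) - t, 0.0)

def sparse_pca_step(A, X, Y, lam, gamma, block):
    """One block update for 0.5*||A - X^T Y||_F^2 + lam*(||X||_1 + ||Y||_1)."""
    R = X.T @ Y - A                     # residual, n x n
    if block == 0:                      # update X; grad_X f = Y @ R.T
        X = soft_threshold(X - gamma * (Y @ R.T), gamma * lam)
    else:                               # update Y; grad_Y f = X @ R
        Y = soft_threshold(Y - gamma * (X @ R), gamma * lam)
    return X, Y
```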
In both experiments X and Y are treated as coordinate blocks. Notice that for this problem, the
SAPALM update decouples over the entries of each coordinate block. Each worker updates its
coordinate block (say, X) by cycling through the coordinates of X and updating each in turn,
restarting at a random coordinate after each cycle.
In Figures (1a) and (1c), we see objective function values plotted by iteration. By this metric,
SAPALM performs as well as PALM, its single threaded variant; for the second problem, the curves
for different thread counts all overlap. Note, in particular, that SAPALM does not diverge. But
SAPALM can add additional workers to increment the iteration counter more quickly, as seen in
Figure 1b, allowing SAPALM to outperform its single threaded variant.
We measure the speedup $S_k(p)$ of SAPALM by the relative time for $p$ workers to produce $k$ iterates,
$$S_k(p) = \frac{T_k(1)}{T_k(p)}, \tag{7}$$
where $T_k(p)$ is the time to produce $k$ iterates using $p$ workers. Table 2 shows that SAPALM achieves near linear speedup for a range of variable sizes $d$. (Dashes denote experiments not run.)
[Figure 1 plots omitted: panels (a)-(d) show the objective vs. iterates and vs. time (s); legends show thread counts 1, 2, 4, 8, 16.]
Figure 1: Sparse PCA ((1a) and (1b)) and Firm Thresholding PCA ((1c) and (1d)) tests for d = 10.
threads | d=10    | d=20     | d=100
1       | 65.9972 | 253.387  | 6144.9427
2       | 33.464  | 127.8973 | -
4       | 17.5415 | 67.3267  | -
8       | 9.2376  | 34.5614  | 833.5635
16      | 4.934   | 17.4362  | 416.8038

Table 1: Sparse PCA timing (seconds) for 16 iterations by problem size and thread count.

threads | d=10   | d=20    | d=100
1       | 1      | 1       | 1
2       | 1.9722 | 1.9812  | -
4       | 3.7623 | 3.7635  | -
8       | 7.1444 | 7.3315  | 7.3719
16      | 13.376 | 14.5322 | 14.743

Table 2: Sparse PCA speedup for 16 iterations by problem size and thread count.
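As a sanity check, each column of Table 2 is the entrywise ratio $T_k(1)/T_k(p)$ of the corresponding column of Table 1, per (7):

```python
times = {1: 65.9972, 2: 33.464, 4: 17.5415, 8: 9.2376, 16: 4.934}  # d=10 column
speedup = {p: times[1] / t for p, t in times.items()}
# -> {1: 1.0, 2: 1.9722, 4: 3.7623, 8: 7.1444, 16: 13.376}, matching Table 2
```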
Deviations from linearity can be attributed to a breakdown in the abstraction of a "shared memory" computer: as each worker modifies the "shared" variables X and Y, some communication is required to maintain cache coherency across all cores and processors. In addition, Intel Xeon processors share L3 cache between all cores on the processor. All threads compete for the same L3 cache space, slowing down each iteration. For small d, write conflicts are more likely; for large d, communication to maintain cache coherency dominates.
6 Discussion
A few straightforward generalizations of our work are possible; we omit them to simplify notation.
Removing the log factors. The log factors in Theorem 1 can easily be removed by fixing a
maximum number of iterations for which we plan to run SAPALM and adjusting the ck factors
accordingly, as in [14, Equation (3.2.10)].
Cluster points of $\{x^k\}_{k\in\mathbb{N}}$. Using the strategy employed in [5], it's possible to show that all cluster points of $\{x^k\}_{k\in\mathbb{N}}$ are (almost surely) stationary points of $f + r$.

Weakened Assumptions on Lipschitz Constants. We can weaken our assumptions to allow $L_j$ to vary: for every $x \in H$, we can assume $L_j(x_1,\dots,x_{j-1},\cdot,x_{j+1},\dots,x_m)$-Lipschitz continuity of each partial gradient $\nabla_j f(x_1,\dots,x_{j-1},\cdot,x_{j+1},\dots,x_m) : H_j \to H_j$.
7 Conclusion
This paper presented SAPALM, the first asynchronous parallel optimization method that provably
converges on a large class of nonconvex, nonsmooth problems. We provide a convergence theory for
SAPALM, and show that with the parameters suggested by this theory, SAPALM achieves a near linear
speedup over serial PALM. As a special case, we provide the first convergence rate for (synchronous
or asynchronous) stochastic block proximal gradient methods for nonconvex regularizers. These
results give specific guidance to ensure fast convergence of practical asynchronous methods on a
large class of important, nonconvex optimization problems, and pave the way towards a deeper
understanding of stability of these methods in the presence of noise.
References
[1] A. Agarwal and J. C. Duchi. Distributed delayed stochastic optimization. In 2012 IEEE 51st IEEE Conference on Decision and Control (CDC), pages 5451-5452, Dec 2012.
[2] D. P. Bertsekas and J. N. Tsitsiklis. Parallel and Distributed Computation: Numerical Methods, volume 23. Prentice Hall, 1989.
[3] J. Bolte, S. Sabach, and M. Teboulle. Proximal alternating linearized minimization for nonconvex and nonsmooth problems. Mathematical Programming, 146(1-2):459-494, 2014.
[4] D. Davis. SMART: The Stochastic Monotone Aggregated Root-Finding Algorithm. arXiv preprint arXiv:1601.00698, 2016.
[5] D. Davis. The Asynchronous PALM Algorithm for Nonsmooth Nonconvex Problems. arXiv preprint arXiv:1604.00526, 2016.
[6] J. Dean, G. Corrado, R. Monga, K. Chen, M. Devin, M. Mao, M. Ranzato, A. Senior, P. Tucker, K. Yang, Q. V. Le, and A. Y. Ng. Large Scale Distributed Deep Networks. In F. Pereira, C. J. C. Burges, L. Bottou, and K. Q. Weinberger, editors, Advances in Neural Information Processing Systems 25, pages 1223-1231. Curran Associates, Inc., 2012.
[7] R. Ge, F. Huang, C. Jin, and Y. Yuan. Escaping from saddle points: online stochastic gradient for tensor decomposition. In Proceedings of The 28th Conference on Learning Theory, pages 797-842, 2015.
[8] S. Ghadimi, G. Lan, and H. Zhang. Mini-batch stochastic approximation methods for nonconvex stochastic composite optimization. Mathematical Programming, 155(1):267-305, 2016.
[9] M. Hong. A distributed, asynchronous and incremental algorithm for nonconvex optimization: An ADMM based approach. arXiv preprint arXiv:1412.6058, 2014.
[10] X. Lian, Y. Huang, Y. Li, and J. Liu. Asynchronous Parallel Stochastic Gradient for Nonconvex Optimization. In Advances in Neural Information Processing Systems, pages 2719-2727, 2015.
[11] J. Liu, S. J. Wright, C. Ré, V. Bittorf, and S. Sridhar. An Asynchronous Parallel Stochastic Coordinate Descent Algorithm. Journal of Machine Learning Research, 16:285-322, 2015.
[12] J. Liu, S. J. Wright, and S. Sridhar. An Asynchronous Parallel Randomized Kaczmarz Algorithm. arXiv preprint arXiv:1401.4780, 2014.
[13] H. Mania, X. Pan, D. Papailiopoulos, B. Recht, K. Ramchandran, and M. I. Jordan. Perturbed Iterate Analysis for Asynchronous Stochastic Optimization. arXiv preprint arXiv:1507.06970, 2015.
[14] Y. Nesterov. Introductory Lectures on Convex Optimization: A Basic Course. Applied Optimization. Kluwer Academic Publishers, Boston, Dordrecht, London, 2004.
[15] Z. Peng, Y. Xu, M. Yan, and W. Yin. ARock: an Algorithmic Framework for Asynchronous Parallel Coordinate Updates. arXiv preprint arXiv:1506.02396, 2015.
[16] B. Recht, C. Re, S. Wright, and F. Niu. Hogwild: A Lock-Free Approach to Parallelizing Stochastic Gradient Descent. In Advances in Neural Information Processing Systems, pages 693-701, 2011.
[17] R. T. Rockafellar and R. J.-B. Wets. Variational Analysis, volume 317. Springer Science & Business Media, 2009.
[18] P. Tseng. On the Rate of Convergence of a Partially Asynchronous Gradient Projection Algorithm. SIAM Journal on Optimization, 1(4):603-619, 1991.
[19] J. Tsitsiklis, D. Bertsekas, and M. Athans. Distributed asynchronous deterministic and stochastic gradient optimization algorithms. IEEE Transactions on Automatic Control, 31(9):803-812, Sep 1986.
[20] M. Udell, C. Horn, R. Zadeh, and S. Boyd. Generalized Low Rank Models. arXiv preprint arXiv:1410.0342, 2014.
[21] J. Woodworth and R. Chartrand. Compressed sensing recovery via nonconvex shrinkage penalties. arXiv preprint arXiv:1504.02923, 2015.
[22] Y. Xu and W. Yin. Block Stochastic Gradient Iteration for Convex and Nonconvex Optimization. SIAM Journal on Optimization, 25(3):1686-1716, 2015.
[23] H. Yun, H.-F. Yu, C.-J. Hsieh, S. V. N. Vishwanathan, and I. Dhillon. NOMAD: Non-locking, Stochastic Multi-machine Algorithm for Asynchronous and Decentralized Matrix Completion. Proc. VLDB Endow., 7(11):975-986, July 2014.
Optimistic Bandit Convex Optimization
Mehryar Mohri
Courant Institute and Google
251 Mercer Street
New York, NY 10012
mohri@cims.nyu.edu

Scott Yang
Courant Institute
251 Mercer Street
New York, NY 10012
yangs@cims.nyu.edu
Abstract
We introduce the general and powerful scheme of predicting information re-use in optimization algorithms. This allows us to devise a computationally efficient algorithm for bandit convex optimization with new state-of-the-art guarantees for both Lipschitz loss functions and loss functions with Lipschitz gradients. This is the first algorithm admitting both a polynomial time complexity and a regret that is polynomial in the dimension of the action space that improves upon the original regret bound for Lipschitz loss functions, achieving a regret of $\widetilde{O}(T^{11/16} d^{3/8})$. Our algorithm further improves upon the best existing polynomial-in-dimension bound (both computationally and in terms of regret) for loss functions with Lipschitz gradients, achieving a regret of $\widetilde{O}(T^{8/13} d^{5/3})$.
1 Introduction
Bandit convex optimization (BCO) is a key framework for modeling learning problems with sequential
data under partial feedback. In the BCO scenario, at each round, the learner selects a point (or action)
in a bounded convex set and observes the value at that point of a convex loss function determined by
an adversary. The feedback received is limited to that information: no gradient or any other higher
order information about the function is provided to the learner. The learner's objective is to minimize
his regret, that is the difference between his cumulative loss over a finite number of rounds and that
of the loss of the best fixed action in hindsight.
The limited feedback makes the BCO setup relevant to a number of applications, including online
advertising. On the other hand, it also makes the problem notoriously difficult and requires the learner
to find a careful trade-off between exploration and exploitation. While it has been the subject of
extensive study in recent years, the fundamental BCO problem remains one of the most challenging
scenarios in machine learning where several questions concerning optimality guarantees remain open.
The original work of Flaxman et al. [2005] showed that a regret of $\widetilde{O}(T^{5/6})$ is achievable for bounded loss functions and of $\widetilde{O}(T^{3/4})$ for Lipschitz loss functions (the latter bound is also given in [Kleinberg, 2004]), both of which are still the best known results given by explicit algorithms. Agarwal et al. [2010] introduced an algorithm that maintains a regret of $\widetilde{O}(T^{2/3})$ for loss functions that are both Lipschitz and strongly convex, which is also still state-of-the-art. For functions that are Lipschitz and also admit Lipschitz gradients, Saha and Tewari [2011] designed an algorithm with a regret of $\widetilde{O}(T^{2/3})$, a result that was recently improved to $\widetilde{O}(T^{5/8})$ by Dekel et al. [2015].
Here, we further improve upon these bounds both in the Lipschitz and Lipschitz-gradient settings. By incorporating the novel and powerful idea of predicting information re-use, we introduce an algorithm with a regret bound of $\widetilde{O}(T^{11/16})$ for Lipschitz loss functions. Similarly, our algorithm also achieves the best regret guarantee among computationally tractable algorithms for loss functions with Lipschitz gradients: $\widetilde{O}(T^{8/13})$. Both bounds admit a relatively mild dependency on the dimension of the action space.
We note that the recent remarkable work by [Bubeck et al., 2015, Bubeck and Eldan, 2015] has proven the existence of algorithms that can attain a regret of $\widetilde{O}(T^{1/2})$, which matches the known lower bound $\Omega(T^{1/2})$ given by Dani et al. Thus, the dependency of our bounds with respect to $T$ is not optimal. Furthermore, two recent unpublished manuscripts, [Hazan and Li, 2016] and [Bubeck et al., 2016], present algorithms achieving regret $\widetilde{O}(T^{1/2})$. These results, once verified, would be ground-breaking contributions to the literature. However, unlike our algorithms, the regret bound for both of these algorithms admits a large dependency on the dimension $d$ of the action space: exponential for [Hazan and Li, 2016], $d^{O(9.5)}$ for [Bubeck et al., 2016]. One hope is that the novel
exponential for [Hazan and Li, 2016], dO(9.5) for [Bubeck et al., 2016]. One hope is that the novel
ideas introduced by Hazan and Li [2016] (the application of the ellipsoid method with a restart button
and lower convex envelopes) or those by Bubeck et al. [2016] (which also make use of the restart
idea but introduces a very original kernel method) could be combined with those presented in this
paper to derive algorithms with the most favorable guarantees with respect to both T and d.
We begin by formally introducing our notation and setup. We then highlight some of the essential
ideas in previous work before introducing our new key insight. Next, we give a detailed description
of our algorithm for which we prove theoretical guarantees in several settings.
2 Preliminaries

2.1 BCO scenario
The scenario of bandit convex optimization, which dates back to [Flaxman et al., 2005], is a sequential prediction problem on a convex compact domain $K \subseteq \mathbb{R}^d$. At each round $t \in [1, T]$, the learner selects a (possibly) randomized action $x_t \in K$ and incurs the loss $f_t(x_t)$ based on a convex function $f_t : K \to \mathbb{R}$ chosen by the adversary. We assume that the adversary is oblivious, so that the loss functions are independent of the player's actions. The objective of the learner is to minimize his regret with respect to the optimal static action in hindsight, that is, if we denote by $\mathcal{A}$ the learner's randomized algorithm, the following quantity:
$$\mathrm{Reg}_T(\mathcal{A}) = \mathbb{E}\left[\sum_{t=1}^T f_t(x_t)\right] - \min_{x\in K}\sum_{t=1}^T f_t(x). \tag{1}$$
We will denote by $D$ the diameter of the action space $K$ in the Euclidean norm: $D = \sup_{x,y\in K}\|x - y\|_2$. Throughout this paper, we will often use different induced norms. We will denote by $\|\cdot\|_A$ the norm induced by a symmetric positive definite (SPD) matrix $A \succ 0$, defined for all $x \in \mathbb{R}^d$ by $\|x\|_A = \sqrt{x^\top A x}$. Moreover, we will denote by $\|\cdot\|_{A,*}$ its dual norm, given by $\|\cdot\|_{A^{-1}}$. To simplify the notation, we will write $\|\cdot\|_x$ instead of $\|\cdot\|_{\nabla^2 R(x)}$ when the convex and twice differentiable function $R : \mathrm{int}(K) \to \mathbb{R}$ is clear from the context. Here, $\mathrm{int}(K)$ is the set interior of $K$.

We will consider different levels of regularity for the functions $f_t$ selected by the adversary. We will always assume that they are uniformly bounded by some constant $C > 0$, that is $|f_t(x)| \le C$ for all $t \in [1,T]$ and $x \in K$, and, by shifting the loss functions upwards by at most $C$, we will also assume, without loss of generality, that they are non-negative: $f_t \ge 0$, for all $t \in [1,T]$. Moreover, we will always assume that $f_t$ is Lipschitz on $K$ (henceforth denoted $C^{0,1}(K)$):
$$\forall t \in [1,T],\ \forall x, y \in K, \qquad |f_t(x) - f_t(y)| \le L\|x - y\|_2.$$
In some instances, we will further assume that the functions admit $H$-Lipschitz gradients on the interior of the domain (henceforth denoted $C^{1,1}(\mathrm{int}(K))$):
$$\exists H > 0 \colon \forall t \in [1,T],\ \forall x, y \in \mathrm{int}(K), \qquad \|\nabla f_t(x) - \nabla f_t(y)\|_2 \le H\|x - y\|_2.$$
Since $f_t$ is convex, it admits a subgradient at any point in $K$. We denote by $g_t$ one element of the subgradient at the point $x_t \in K$ selected by the learner at round $t$. When the losses are $C^{1,1}$, the only element of the subgradient is the gradient, and $g_t = \nabla f_t(x_t)$. We will use the shorthand $v_{1:t} = \sum_{s=1}^t v_s$ to denote the sum of $t$ vectors $v_1, \dots, v_t$. In particular, $g_{1:t}$ will denote the sum of the subgradients $g_s$ for $s \in [1, t]$.

Lastly, we will denote by $B_1(0) = \{x \in \mathbb{R}^d : \|x\|_2 \le 1\} \subseteq \mathbb{R}^d$ the $d$-dimensional Euclidean ball of radius one and by $\partial B_1(0)$ the unit sphere.
2.2 Follow-the-regularized-leader template
A standard algorithm in online learning, both for the bandit and full-information settings, is the follow-the-regularized-leader (FTRL) algorithm. At each round, the algorithm selects the action that minimizes the cumulative linearized loss augmented with a regularization term $R : K \to \mathbb{R}$. Thus, the action $x_{t+1}$ is defined as follows:
$$x_{t+1} = \operatorname*{argmin}_{x\in K}\; \eta\, g_{1:t}^\top x + R(x),$$
where $\eta > 0$ is a learning rate that determines the trade-off between greedy optimization and regularization.

If we had access to the subgradients at each round, then FTRL with $R(x) = \|x\|_2^2$ and $\eta = \frac{1}{\sqrt{T}}$ would yield a regret of $O(\sqrt{dT})$, which is known to be optimal. But, since we only have access to the loss function values $f_t(x_t)$ and since the loss functions change at each round, a more refined strategy is needed.
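For reference, a minimal sketch of one FTRL step with the Euclidean regularizer $R(x) = \|x\|_2^2$ over a centered ball (an illustrative special case: with this choice the argmin is a scaled projection, since $\eta\, g^\top x + \|x\|_2^2 = \|x + \eta g/2\|_2^2$ up to a constant):

```python
import numpy as np

def ftrl_step(g_sum, eta, radius):
    """argmin over {||x||_2 <= radius} of eta * <g_sum, x> + ||x||_2^2."""
    x = -0.5 * eta * g_sum            # unconstrained minimizer
    n = np.linalg.norm(x)
    return x if n <= radius else (radius / n) * x
```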
2.2.1 One-point gradient estimates and surrogate losses
One key insight into the bandit convex optimization problem, due to Flaxman et al. [2005], is that the subgradient of a smoothed version of the loss function can be estimated by sampling and rescaling around the point the algorithm originally intended to play.

Lemma 1 ([Flaxman et al., 2005, Saha and Tewari, 2011]). Let $f : K \to \mathbb{R}$ be an arbitrary function (not necessarily differentiable) and let $U(\partial B_1(0))$ denote the uniform distribution over the unit sphere. Then, for any $\delta > 0$ and any SPD matrix $A \succ 0$, the function $\hat f$ defined for all $x \in K$ by $\hat f(x) = \mathbb{E}_{u\sim U(\partial B_1(0))}[f(x + \delta A u)]$ is differentiable over $\mathrm{int}(K)$ and, for any $x \in \mathrm{int}(K)$, $\hat g = \frac{d}{\delta} f(x + \delta A u) A^{-1} u$ is an unbiased estimate of $\nabla\hat f(x)$:
$$\mathbb{E}_{u\sim U(\partial B_1(0))}\left[\frac{d}{\delta} f(x + \delta A u) A^{-1} u\right] = \nabla\hat f(x).$$
The result shows that if at each round $t$ we sample $u_t \sim U(\partial B_1(0))$, define an SPD matrix $A_t$ and play the point $y_t = x_t + \delta A_t u_t$ (assuming that $y_t \in K$), then $\hat g_t = \frac{d}{\delta} f_t(x_t + \delta A_t u_t) A_t^{-1} u_t$ is an unbiased estimate of the gradient of $\hat f_t$ at the point $x_t$ originally intended: $\mathbb{E}[\hat g_t] = \nabla\hat f_t(x_t)$. Thus, we can use FTRL with these smoothed gradient estimates, $x_{t+1} = \operatorname*{argmin}_{x\in K} \eta\,\hat g_{1:t}^\top x + R(x)$, at the cost of the approximation error from $f_t$ to $\hat f_t$. Furthermore, the norm of these estimated gradients can be bounded.

Lemma 2. Let $\delta > 0$, $u_t \in \partial B_1(0)$ and $A_t \succ 0$; then the norm of $\hat g_t = \frac{d}{\delta} f_t(x_t + \delta A_t u_t) A_t^{-1} u_t$ can be bounded as follows: $\|\hat g_t\|_{A_t^2}^2 \le \frac{d^2}{\delta^2} C^2$.

Proof. Since $f_t$ is bounded by $C$, we can write $\|\hat g_t\|_{A_t^2}^2 \le \frac{d^2}{\delta^2} C^2\, u_t^\top A_t^{-1} A_t^2 A_t^{-1} u_t \le \frac{d^2}{\delta^2} C^2$.

This gives us a bound on the Lipschitz constant of $\hat f_t$ in terms of $d$, $\delta$, and $C$.
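A numerical sketch of Lemma 1's estimator, with a Monte Carlo check of unbiasedness on a toy quadratic (here $A$, $\delta$, and the sample count are illustrative; for $f(x) = \|x\|_2^2$ and $A = I$, $\hat f(x) = \|x\|_2^2 + \delta^2$, so $\nabla\hat f(x) = 2x$ exactly):

```python
import numpy as np

def one_point_grad(f, x, A, delta, rng):
    """g_hat = (d / delta) * f(x + delta * A @ u) * inv(A) @ u, u ~ U(unit sphere)."""
    d = x.size
    u = rng.standard_normal(d)
    u /= np.linalg.norm(u)
    return (d / delta) * f(x + delta * (A @ u)) * np.linalg.solve(A, u)

rng = np.random.default_rng(0)
x, A = np.array([1.0, -2.0, 0.5]), np.eye(3)
est = np.mean([one_point_grad(lambda z: z @ z, x, A, 0.1, rng)
               for _ in range(200000)], axis=0)
# est approximates grad f_hat(x) = 2 * x, up to Monte Carlo error
```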
2.2.2 Self-concordant barrier as regularization
When sampling to derive a gradient estimate, we need to ensure that the point sampled lies within the feasible set $K$. A second key idea in the BCO problem, due to Abernethy et al. [2008], is to design ellipsoids that are always contained in the feasible sets. This is done by using tools from the theory of interior-point methods in convex optimization.

Definition 1 (Definition 2.3.1 [Nesterov and Nemirovskii, 1994]). Let $K \subseteq \mathbb{R}^d$ be closed convex, and let $\vartheta \ge 0$. A $C^3$ function $R : \mathrm{int}(K) \to \mathbb{R}$ is a $\vartheta$-self-concordant barrier for $K$ if for any sequence $(z_s)_{s=1}^{\infty}$ with $z_s \to \partial K$, we have $R(z_s) \to \infty$, and if for all $x \in \mathrm{int}(K)$ and $y \in \mathbb{R}^d$, the following inequalities hold:
$$|\nabla^3 R(x)[y,y,y]| \le 2\|y\|_x^3, \qquad |\nabla R(x)^\top y| \le \vartheta^{1/2}\|y\|_x.$$
Since self-concordant barriers are preserved under translation, we will always assume for convenience that $\min_{x\in K} R(x) = 0$.

Nesterov and Nemirovskii [1994] show that any $d$-dimensional closed convex set admits an $O(d)$-self-concordant barrier. This allows us to always choose a self-concordant barrier as regularization. We will use several other key properties of self-concordant barriers in this work, all of which are stated precisely in Appendix 7.1.
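A standard example is the log-barrier of a polytope $\{x : a_i^\top x \le b_i,\ i = 1,\dots,n\}$, which is an $n$-self-concordant barrier; the sketch below computes its value, gradient, and Hessian $\nabla^2 R(x)$ (the Hessian is what defines the local norms $\|\cdot\|_x$ and the sampling ellipsoids used later):

```python
import numpy as np

def log_barrier(x, Amat, b):
    """R(x) = -sum_i log(b_i - a_i^T x) on the polytope {x : Amat @ x <= b}."""
    s = b - Amat @ x                               # slacks; must be positive
    value = -np.sum(np.log(s))
    grad = Amat.T @ (1.0 / s)
    hess = (Amat.T * (1.0 / s**2)) @ Amat          # nabla^2 R(x), SPD on int(K)
    return value, grad, hess
```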
3 Previous work
The original paper by Flaxman et al. [2005] sampled indiscriminately around spheres and projected back onto the feasible set at each round. This yielded a regret of $\widetilde{O}(T^{3/4})$ for $C^{0,1}$ loss functions. The follow-up work of Saha and Tewari [2011] showed that for $C^{1,1}$ loss functions, one can run FTRL with a self-concordant barrier as regularization and sample around the Dikin ellipsoid to attain an improved regret bound of $\widetilde{O}(T^{2/3})$.

More recently, Dekel et al. [2015] showed that by averaging the smoothed gradient estimates and still using the self-concordant barrier as regularization, one can achieve a regret of $\widetilde{O}(T^{5/8})$. Specifically, denote by $\bar g_t = \frac{1}{k+1}\sum_{i=0}^k \hat g_{t-i}$ the average of the past $k+1$ incurred gradients, where $\hat g_{t-i} = 0$ for $t - i \le 0$. Then we can play FTRL on these averaged smoothed gradient estimates, $x_{t+1} = \operatorname*{argmin}_{x\in K} \eta\,\bar g_{1:t}^\top x + R(x)$, to attain the better guarantee.
Abernethy and Rakhlin [2009] derive a generic estimate for FTRL algorithms with self-concordant barriers as regularization:

Lemma 3 ([Abernethy and Rakhlin, 2009], Theorems 2.2-2.3). Let $K$ be a closed convex set in $\mathbb{R}^d$ and let $R$ be a $\vartheta$-self-concordant barrier for $K$. Let $\{g_t\}_{t=1}^T \subseteq \mathbb{R}^d$ and $\eta > 0$ be such that $\eta\|g_t\|_{x_t,*} \le 1/4$ for all $t \in [1,T]$. Then the FTRL update $x_{t+1} = \operatorname*{argmin}_{x\in K} \eta\, g_{1:t}^\top x + R(x)$ admits the following guarantees:
$$\|x_t - x_{t+1}\|_{x_t} \le 2\eta\|g_t\|_{x_t,*}, \qquad \forall x \in K,\ \sum_{t=1}^T g_t^\top(x_t - x) \le 2\eta\sum_{t=1}^T\|g_t\|_{x_t,*}^2 + \frac{1}{\eta}R(x).$$
By Lemma 2, if we use FTRL with smoothed gradients, then the upper bound in this lemma can be further bounded by
$$2\eta\sum_{t=1}^T\|\hat g_t\|_{x_t,*}^2 + \frac{1}{\eta}R(x) \le 2\eta T\,\frac{C^2 d^2}{\delta^2} + \frac{1}{\eta}R(x).$$
Furthermore, the regret is then bounded by the sum of this upper bound and the cost of approximating $f_t$ with $\hat f_t$. On the other hand, Dekel et al. [2015] showed that if we use FTRL with averaged smoothed gradients instead, then the upper bound in this lemma can be bounded as
$$2\eta\sum_{t=1}^T\|\bar g_t\|_{x_t,*}^2 + \frac{1}{\eta}R(x) \le 2\eta T\left(\frac{32 C^2 d^2}{\delta^2(k+1)} + 2D^2 L^2\right) + \frac{1}{\eta}R(x).$$
The extra factor $(k+1)$ in the denominator, at the cost of now approximating $f_t$ with $\bar f_t$, is what contributes to their improved regret result.

In general, finding surrogate losses that can both be approximated accurately and admit only a mild variance is a delicate task, and it is not clear how the constructions presented above can be improved.
4 Algorithm

4.1 Predicting the predictable
Rather than designing a newer and better surrogate loss, our strategy will be to exploit the structure of the current state-of-the-art method. Specifically, we draw upon the technique of predictable sequences from [Rakhlin and Sridharan, 2013]. The idea here is to allow the learner to preemptively "guess" the gradient at the next step and optimize for this in the FTRL update. Specifically, if $\tilde g_{t+1}$ is an estimate of the time $t+1$ gradient $g_{t+1}$ based on information up to time $t$, then the learner should play:
$$x_{t+1} = \operatorname*{argmin}_{x\in K}\; \eta\,(g_{1:t} + \tilde g_{t+1})^\top x + R(x).$$
This optimistic FTRL algorithm admits the following guarantee:

Lemma 4 (Lemma 1 [Rakhlin and Sridharan, 2013]). Let $K$ be a closed convex set in $\mathbb{R}^d$, and let $R$ be a $\vartheta$-self-concordant barrier for $K$. Let $\{g_t\}_{t=1}^T \subseteq \mathbb{R}^d$ and $\eta > 0$ such that $\eta\|g_t - \tilde g_t\|_{x_t,*} \le 1/4$ for all $t \in [1,T]$. Then the FTRL update $x_{t+1} = \operatorname*{argmin}_{x\in K} \eta\,(g_{1:t} + \tilde g_{t+1})^\top x + R(x)$ admits the following guarantee:
$$\forall x \in K, \qquad \sum_{t=1}^T g_t^\top(x_t - x) \le 2\eta\sum_{t=1}^T\|g_t - \tilde g_t\|_{x_t,*}^2 + \frac{1}{\eta}R(x).$$
In general, it is not clear what would be a good prediction candidate. Indeed, this is why Rakhlin and Sridharan [2013] called this algorithm an "optimistic" FTRL. However, notice that if we elect to play the averaged smoothed losses as in [Dekel et al., 2015], then the update at each time is $\bar g_t = \frac{1}{k+1}\sum_{i=0}^k \hat g_{t-i}$. This implies that the time $t+1$ gradient is $\bar g_{t+1} = \frac{1}{k+1}\sum_{i=0}^k \hat g_{t+1-i}$, which includes the smoothed gradients from time $t+1$ down to time $t - (k-1)$. The key insight here is that at time $t$, all but the $(t+1)$-th gradient are known!

This means that if we predict
$$\tilde g_{t+1} = \frac{1}{k+1}\sum_{i=0}^k \hat g_{t+1-i} - \frac{1}{k+1}\hat g_{t+1} = \frac{1}{k+1}\sum_{i=1}^k \hat g_{t+1-i},$$
then the first term in the bound of Lemma 4 will be in terms of
$$\bar g_t - \tilde g_t = \frac{1}{k+1}\sum_{i=0}^k \hat g_{t-i} - \frac{1}{k+1}\sum_{i=1}^k \hat g_{t-i} = \frac{1}{k+1}\hat g_t.$$
In other words, all but the time $t$ smoothed gradient will cancel out. Essentially, we are predicting the predictable portion of the averaged gradient and guaranteeing that the optimism will pay off. Moreover, where we gained a factor of $\frac{1}{k+1}$ in the averaged loss case, we should expect to gain a factor of $\frac{1}{(k+1)^2}$ by using this optimistic prediction.

Note that this technique of optimistically predicting the variance reduction is widely applicable. As alluded to with the reference to [Schmidt et al., 2013], many variance reduction-type techniques, particularly in stochastic optimization, use historical information in their estimates (e.g. SVRG [Johnson and Zhang, 2013], SAGA [Defazio et al., 2014]). In these cases, it is possible to "predict" the information re-use and improve the convergence rates of each algorithm.
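A quick numeric check of the cancellation $\bar g_t - \tilde g_t = \frac{1}{k+1}\hat g_t$ (the $\hat g$'s are arbitrary stand-in vectors):

```python
import numpy as np

rng = np.random.default_rng(1)
k, d, t = 4, 3, 7
g_hat = {s: rng.standard_normal(d) for s in range(t + 1)}  # stand-ins for smoothed grads
g_bar = sum(g_hat[t - i] for i in range(k + 1)) / (k + 1)          # played average
g_tilde = sum(g_hat[t - i] for i in range(1, k + 1)) / (k + 1)     # optimistic guess
assert np.allclose(g_bar - g_tilde, g_hat[t] / (k + 1))
```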
4.2 Description and pseudocode
Here, we give a detailed description of our algorithm, OptimisticBCO. At each round $t$, the algorithm uses a sample $u_t$ from the uniform distribution over the unit sphere to define an unbiased estimate of the gradient of $\hat f_t$, a smoothed version of the loss function $f_t$, as described in Section 2.2.1: $\hat g_t \leftarrow \frac{d}{\delta} f_t(y_t)(\nabla^2 R(x_t))^{1/2} u_t$. Next, the trailing average of these unbiased estimates over a fixed window of length $k+1$ is computed: $\bar g_t = \frac{1}{k+1}\sum_{i=0}^k \hat g_{t-i}$. The remaining steps executed at each round coincide with the Follow-the-Regularized-Leader update with a self-concordant barrier used as a regularizer, augmented with an optimistic prediction of the next round's trailing average. As described in Section 4.1, all but one of the terms in the trailing average are known, and we predict their occurrence:
$$\tilde g_{t+1} = \frac{1}{k+1}\sum_{i=1}^k \hat g_{t+1-i}, \qquad x_{t+1} = \operatorname*{argmin}_{x\in K}\; \eta\,(\bar g_{1:t} + \tilde g_{t+1})^\top x + R(x).$$
Note that Theorem 3 implies that the actual point we play, $y_t$, is always a feasible point in $K$. Figure 1 presents the pseudocode of the algorithm.
OptimisticBCO($R, \delta, \eta, k, x_1$)
 1: for $t \leftarrow 1$ to $T$ do
 2:    $u_t \leftarrow$ Sample($U(\partial B_1(0))$)
 3:    $y_t \leftarrow x_t + \delta(\nabla^2 R(x_t))^{-1/2} u_t$
 4:    Play($y_t$)
 5:    $f_t(y_t) \leftarrow$ ReceiveLoss($y_t$)
 6:    $\hat g_t \leftarrow \frac{d}{\delta} f_t(y_t)(\nabla^2 R(x_t))^{1/2} u_t$
 7:    $\bar g_t \leftarrow \frac{1}{k+1}\sum_{i=0}^k \hat g_{t-i}$
 8:    $\tilde g_{t+1} \leftarrow \frac{1}{k+1}\sum_{i=1}^k \hat g_{t+1-i}$
 9:    $x_{t+1} \leftarrow \operatorname*{argmin}_{x\in K} \eta\,(\bar g_{1:t} + \tilde g_{t+1})^\top x + R(x)$
10: return $\sum_{t=1}^T f_t(y_t)$

Figure 1: Pseudocode of OptimisticBCO, with $R : \mathrm{int}(K) \to \mathbb{R}$, $\delta \in (0,1]$, $\eta > 0$, $k \in \mathbb{Z}$, and $x_1 \in K$.
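A direct serial transcription of Figure 1 (a sketch under assumptions: `R_hess` returns $\nabla^2 R(x)$, and `ftrl_solve(v)` is a user-supplied oracle for $\operatorname{argmin}_{x\in K} \eta\, v^\top x + R(x)$, e.g. a few Newton steps on the barrier-regularized objective):

```python
import numpy as np

def optimistic_bco(f_seq, R_hess, ftrl_solve, x1, delta, k, T, rng=None):
    """Serial sketch of OptimisticBCO (Figure 1)."""
    rng = rng or np.random.default_rng(0)
    d = x1.size
    x, g_hat, g_bar_sum, total = x1, [], 0.0, 0.0
    for t in range(T):
        u = rng.standard_normal(d)
        u /= np.linalg.norm(u)                        # u_t ~ U(unit sphere)
        w, V = np.linalg.eigh(R_hess(x))              # spectral square roots
        H_inv_half = V @ (w[:, None] ** -0.5 * V.T)   # (grad^2 R(x))^{-1/2}
        H_half = V @ (w[:, None] ** 0.5 * V.T)        # (grad^2 R(x))^{1/2}
        y = x + delta * (H_inv_half @ u)              # play in the Dikin ellipsoid
        loss = f_seq[t](y)
        total += loss
        g_hat.append((d / delta) * loss * (H_half @ u))
        g_bar = sum(g_hat[max(0, t - k): t + 1]) / (k + 1)        # trailing average
        g_bar_sum = g_bar_sum + g_bar                             # bar g_{1:t}
        g_tilde = sum(g_hat[max(0, t - k + 1): t + 1]) / (k + 1)  # known part of next avg
        x = ftrl_solve(g_bar_sum + g_tilde)
    return total
```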
5 Regret guarantees
In this section, we state our main results, which are regret guarantees for OptimisticBCO in the $C^{0,1}$ and $C^{1,1}$ cases. We also highlight the analysis and proofs for each regime.

5.1 Main results

The following is our main result for the $C^{0,1}$ case.
Theorem 1 ($C^{0,1}$ Regret). Let $K \subseteq \mathbb{R}^d$ be a convex set with diameter $D$ and $(f_t)_{t=1}^T$ a sequence of loss functions with each $f_t : K \to \mathbb{R}_+$ $C$-bounded and $L$-Lipschitz. Let $R$ be a $\vartheta$-self-concordant barrier for $K$. Then, for $\eta k \le \frac{\delta}{12Cd}$ and any $\xi \in (0,1)$, the regret of OptimisticBCO can be bounded as follows:
$$\mathrm{Reg}_T(\text{OptimisticBCO}) \le \delta L T + L\xi D T + \frac{Ck}{2} + \frac{2Cd^2\eta T}{\delta^2(k+1)^2} + \frac{\vartheta}{\eta}\log(1/\xi) + LT\cdot 2\eta D\left(\sqrt{3}L^{1/2}\sqrt{k} + \sqrt{2DL}\,k + \frac{48Cd\sqrt{k}}{\delta}\right).$$
In particular, for $\eta = T^{-11/16}d^{-3/8}$, $\delta = T^{-5/16}d^{3/8}$, $k = T^{1/8}d^{1/4}$, the following guarantee holds for the regret of the algorithm:
$$\mathrm{Reg}_T(\text{OptimisticBCO}) = \widetilde{O}\!\left(T^{11/16} d^{3/8}\right).$$
The above result is the first improvement on the regret of Lipschitz losses in terms of T since the
original algorithm of Flaxman et al. [2005] that is realizable from a concrete algorithm as well as
polynomial in both dimension and time (both computationally and in terms of regret).
Theorem 2 ($C^{1,1}$ Bound). Let $K \subseteq \mathbb{R}^d$ be a convex set with diameter $D$ and $(f_t)_{t=1}^T$ a sequence of loss functions with each $f_t : K \to \mathbb{R}_+$ $C$-bounded, $L$-Lipschitz and $H$-smooth. Let $R$ be a $\vartheta$-self-concordant barrier for $K$. Then, for $\eta k \le \frac{\delta}{12d}$ and any $\xi \in (0,1)$, the regret of OptimisticBCO can be bounded as follows:
$$\mathrm{Reg}_T(\text{OptimisticBCO}) \le \xi L T + H\delta^2 D^2 T + (TL + DHT\delta)\,2\eta k D\left(\frac{\sqrt{3}L^{1/2}}{\sqrt{k}} + \sqrt{2DL} + \frac{48d}{\delta\sqrt{k}}\right) + \frac{\vartheta}{\eta}\log(1/\xi) + Ck + \frac{\eta d^2 T}{\delta^2(k+1)^2}.$$
In particular, for $\eta = T^{-8/13}d^{-5/6}$, $\delta = T^{-5/26}d^{1/3}$, $k = T^{1/13}d^{5/3}$, the following guarantee holds for the regret of the algorithm:
$$\mathrm{Reg}_T(\text{OptimisticBCO}) = \widetilde{O}\!\left(T^{8/13} d^{5/3}\right).$$
This result is currently the best polynomial-in-time regret bound that is also polynomial in the
dimension of the action space (both computationally and in terms of regret). It improves upon the
work of Saha and Tewari [2011] and Dekel et al. [2015].
We now explain the analysis of both results, starting with Theorem 1 for C0,1 losses.
5.2 $C^{0,1}$ analysis
Our analysis proceeds in two steps. We first modularize the cost of approximating the original losses
ft (yt ) incurred with the averaged smoothed losses that we treat as surrogate losses. Then we show
that the algorithm minimizes the regret against the surrogate losses effectively. The proofs of all
lemmas in this section are presented in Appendix 7.2.
Lemma 5 (C^{0,1} Structural bound on true losses in terms of smoothed losses). Let (f_t)_{t=1}^{T} be a sequence of loss functions, and assume that f_t : K → R_+ is C-bounded and L-Lipschitz, where K ⊆ R^d. Denote

f̂_t(x) = E_{u∼U(∂B_1(0))}[ f_t(x + δA_t u) ],   ĝ_t = (d/δ) f_t(y_t) A_t^{−1} u_t,   y_t = x_t + δA_t u_t

for arbitrary A_t, δ, and u_t. Let x* = argmin_{x∈K} Σ_{t=1}^{T} f_t(x), and let x*_ξ ∈ argmin_{y∈K, dist(y,∂K)>ξ} ‖y − x*‖. Assume that we play y_t at every round. Then the following structural estimate holds:

Reg_T(A) = E[ Σ_{t=1}^{T} f_t(y_t) − f_t(x*) ] ≤ ξLT + 2LδDT + Σ_{t=1}^{T} E[ f̂_t(x_t) − f̂_t(x*_ξ) ].
Thus, at the price of ξLT + 2LδDT, it suffices to look at the performance of the algorithm on the smoothed losses. Notice that the only assumptions we have made so far are that we play points sampled on an ellipsoid around the desired point scaled by δ and that the loss functions are Lipschitz.
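For intuition, the unbiasedness behind this step is the classical sphere-sampling identity in the spirit of Flaxman et al. [2005], stated here for symmetric positive definite A_t (which holds for A_t = (∇²R(x_t))^{−1/2}); the version for the sphere-averaged f̂_t used in the lemma differs only by standard O(δL) smoothing arguments:

\mathbb{E}_{v \sim U(B_1(0))}\big[f_t(x + \delta A_t v)\big]
\quad\Longrightarrow\quad
\nabla_x\, \mathbb{E}_{v \sim U(B_1(0))}\big[f_t(x + \delta A_t v)\big]
= \frac{d}{\delta}\, \mathbb{E}_{u \sim U(\partial B_1(0))}\big[f_t(x + \delta A_t u)\, A_t^{-1} u\big],

so that, conditioned on the history, ĝ_t is an unbiased estimate of the gradient of the smoothed loss at x_t.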
Lemma 6 (C^{0,1} Structural bound on smoothed losses in terms of averaged losses). Let (f_t)_{t=1}^{T} be a sequence of loss functions, and assume that f_t : K → R_+ is C-bounded and L-Lipschitz, where K ⊆ R^d. Denote

f̂_t(x) = E_{u∼U(∂B_1(0))}[ f_t(x + δA_t u) ],   ĝ_t = (d/δ) f_t(y_t) A_t^{−1} u_t,   y_t = x_t + δA_t u_t

for arbitrary A_t, δ, and u_t. Let x* = argmin_{x∈K} Σ_{t=1}^{T} f_t(x), and let x*_ξ ∈ argmin_{y∈K, dist(y,∂K)>ξ} ‖y − x*‖. Furthermore, denote

f̄_t(x) = (1/(k+1)) Σ_{i=0}^{k} f̂_{t−i}(x),   ḡ_t = (1/(k+1)) Σ_{i=0}^{k} ĝ_{t−i}.

Assume that we play y_t at every round. Then we have the structural estimate:

Σ_{t=1}^{T} E[ f̂_t(x_t) − f̂_t(x*_ξ) ] ≤ Ck/2 + LT · sup_{t∈[1,T], i∈[0,k∧t]} E[ ‖x_{t−i} − x_t‖₂ ] + Σ_{t=1}^{T} E[ ḡ_t^⊤ (x_t − x*_ξ) ].
While we use averaged smoothed losses as in [Dekel et al., 2015], the analysis in this lemma is actually somewhat different. Because Dekel et al. [2015] always assume that the loss functions are in C^{1,1}, they elect to use the following decomposition:

f̂_t(x_t) − f̂_t(x*_ξ) = f̂_t(x_t) − f̄_t(x_t) + f̄_t(x_t) − f̄_t(x*_ξ) + f̄_t(x*_ξ) − f̂_t(x*_ξ).

This is because they can relate ∇f̄_t(x) = (1/(k+1)) Σ_{i=0}^{k} ∇f̂_{t−i}(x) to ḡ_t, whose conditional expectation is (1/(k+1)) Σ_{i=0}^{k} ∇f̂_{t−i}(x_{t−i}), using the fact that the gradients are Lipschitz. Since the gradients of C^{0,1} functions are not Lipschitz, we cannot use the same analysis. Instead, we use the decomposition

f̂_t(x_t) − f̂_t(x*_ξ) = f̂_t(x_t) − f̂_{t−i}(x_{t−i}) + f̂_{t−i}(x_{t−i}) − f̄_t(x*_ξ) + f̄_t(x*_ξ) − f̂_t(x*_ξ).

The next lemma affirms that we do indeed get the improved 1/(k+1)² factor from predicting the predictable component of the average gradient.
Lemma 7 (C^{0,1} Algorithmic bound on the averaged losses). Let (f_t)_{t=1}^{T} be a sequence of loss functions, and assume that f_t : K → R_+ is C-bounded and L-Lipschitz, where K ⊆ R^d. Let x* = argmin_{x∈K} Σ_{t=1}^{T} f_t(x), and let x*_ξ ∈ argmin_{y∈K, dist(y,∂K)>ξ} ‖y − x*‖. Assume that we play according to the algorithm with ηk ≤ δ/(12Cd). Then we maintain the following guarantee:

Σ_{t=1}^{T} E[ ḡ_t^⊤ (x_t − x*_ξ) ] ≤ (2Cd²ηT)/(δ²(k+1)²) + (1/η) R(x*_ξ).
So far, we have demonstrated a bound on the regret of the form:

Reg_T(A) ≤ ξLT + 2LδDT + Ck/2 + LT · sup_{t∈[T], i∈[k∧t]} E[ ‖x_{t−i} − x_t‖₂ ] + (2Cd²ηT)/(δ²(k+1)²) + (1/η) R(x*_ξ).
Thus, it remains to find a tight bound on sup_{t∈[1,T], i∈[0,k∧t]} E[ ‖x_{t−i} − x_t‖₂ ], which measures the stability of the actions across the history that we average over. This result is similar to that of Dekel et al. [2015], except that we additionally need to account for the optimistic gradient prediction used.
Lemma 8 (C^{0,1} Algorithmic bound on the stability of actions). Let (f_t)_{t=1}^{T} be a sequence of loss functions, and assume that f_t : K → R_+ is C-bounded and L-Lipschitz, where K ⊆ R^d. Assume that we play according to the algorithm with ηk ≤ δ/(12Cd). Then the following estimate holds:

E[ ‖x_{t−i} − x_t‖₂ ] ≤ 2ηkD ( √3 L^{1/2}/√k + √(2DL) + 48Cd/(δ√k) ).
Proof. [of Theorem 1] Putting all the pieces together from Lemmas 5, 6, 7, and 8 shows that

Reg_T(A) ≤ ξLT + 2LδDT + Ck/2 + (2Cd²ηT)/(δ²(k+1)²) + (1/η) R(x*_ξ) + 2ηkDLT ( √3 L^{1/2}/√k + √(2DL) + 48Cd/(δ√k) ).

Since x*_ξ is at least ξ away from the boundary, it follows from [Abernethy and Rakhlin, 2009] that R(x*_ξ) ≤ ν log(1/ξ). Plugging in the stated quantities for η, k, and δ yields the result.
5.3 C^{1,1} analysis
The analysis of the C^{1,1} regret bound is similar to the C^{0,1} case. The only difference is that we leverage the higher regularity of the losses to provide a more refined estimate on the cost of approximating f_t with f̂_t. Apart from that, we will reuse the bounds derived in Lemmas 6, 7, and 8. The proof of the following lemma, along with that of Theorem 2, is provided in Appendix 7.3.
Lemma 9 (C^{1,1} Structural bound on true losses in terms of smoothed losses). Let (f_t)_{t=1}^{T} be a sequence of loss functions, and assume that f_t : K → R_+ is C-bounded, L-Lipschitz, and H-smooth, where K ⊆ R^d. Denote

f̂_t(x) = E_{u∼U(∂B_1(0))}[ f_t(x + δA_t u) ],   ĝ_t = (d/δ) f_t(y_t) A_t^{−1} u_t,   y_t = x_t + δA_t u_t

for arbitrary A_t, δ, and u_t. Let x* = argmin_{x∈K} Σ_{t=1}^{T} f_t(x), and let x*_ξ ∈ argmin_{y∈K, dist(y,∂K)>ξ} ‖y − x*‖. Assume that we play y_t at every round. Then the following structural estimate holds:

Reg_T(A) = E[ Σ_{t=1}^{T} f_t(y_t) − f_t(x*) ] ≤ ξLT + 2Hδ²D²T + Σ_{t=1}^{T} E[ f̂_t(x_t) − f̂_t(x*_ξ) ].

6 Conclusion
We designed a computationally efficient algorithm for bandit convex optimization admitting state-of-the-art guarantees for C^{0,1} and C^{1,1} loss functions. This was achieved using the general and powerful technique of predicting predictable sequences. The ideas we describe here are directly applicable to other areas of optimization, in particular stochastic optimization.
Acknowledgements
This work was partly funded by NSF grants CCF-1535987 and IIS-1618662, and by NSF GRFP DGE-1342536.
References
J. Abernethy and A. Rakhlin. Beating the adaptive bandit with high probability. In COLT, 2009.
J. Abernethy, E. Hazan, and A. Rakhlin. Competing in the dark: An efficient algorithm for bandit
linear optimization. In COLT, pages 263?274, 2008.
A. Agarwal, O. Dekel, and L. Xiao. Optimal algorithms for online convex optimization with
multi-point bandit feedback. In COLT, pages 28?40, 2010.
S. Bubeck and R. Eldan. Multi-scale exploration of convex functions and bandit convex optimization.
CoRR, abs/1507.06580, 2015.
S. Bubeck, O. Dekel, T. Koren, and Y. Peres. Bandit convex optimization: √T regret in one dimension. CoRR, abs/1502.06398, 2015.
S. Bubeck, R. Eldan, and Y. T. Lee. Kernel-based methods for bandit convex optimization. CoRR,
abs/1607.03084, 2016.
V. Dani, T. P. Hayes, and S. M. Kakade. Stochastic linear optimization under bandit feedback. In COLT, 2008.
A. Defazio, F. Bach, and S. Lacoste-Julien. Saga: A fast incremental gradient method with support
for non-strongly convex composite objectives. In NIPS, pages 1646?1654, 2014.
O. Dekel, R. Eldan, and T. Koren. Bandit smooth convex optimization: Improving the bias-variance
tradeoff. In NIPS, pages 2908?2916, 2015.
A. D. Flaxman, A. T. Kalai, and H. B. McMahan. Online convex optimization in the bandit setting:
Gradient descent without a gradient. In SODA, pages 385?394, 2005.
E. Hazan and Y. Li. An optimal algorithm for bandit convex optimization. CoRR, abs/1603.04350,
2016.
R. Johnson and T. Zhang. Accelerating stochastic gradient descent using predictive variance reduction.
In NIPS, pages 315?323, 2013.
R. D. Kleinberg. Nearly tight bounds for the continuum-armed bandit problem. In Advances in
Neural Information Processing Systems, pages 697?704, 2004.
Y. Nesterov. Introductory Lectures on Convex Optimization: A Basic Course. Springer, New York,
NY, USA, 2004.
Y. Nesterov and A. Nemirovskii. Interior-point Polynomial Algorithms in Convex Programming.
Studies in Applied Mathematics. Society for Industrial and Applied Mathematics, 1994. ISBN
9781611970791.
A. Rakhlin and K. Sridharan. Online learning with predictable sequences. In COLT, pages 993?1019,
2013.
A. Saha and A. Tewari. Improved regret guarantees for online smooth convex optimization with
bandit feedback. In AISTATS, pages 636?642, 2011.
M. W. Schmidt, N. L. Roux, and F. R. Bach. Minimizing finite sums with the stochastic average
gradient. CoRR, abs/1309.2388, 2013.
Extended Regularization Methods for
Nonconvergent Model Selection
W. Finnoff, F. Hergert and H.G. Zimmermann
Siemens AG, Corporate Research and Development
Otto-Hahn-Ring 6
8000 Munich 83, Fed. Rep. Germany
Abstract
Many techniques for model selection in the field of neural networks correspond to well established statistical methods. The method of 'stopped training', on the other hand, in which an oversized network is trained until the error on a further validation set of examples deteriorates, at which point training is stopped, is a true innovation, since model selection doesn't require convergence of the training process.

In this paper we show that this performance can be significantly enhanced by extending the 'nonconvergent model selection method' of stopped training to include dynamic topology modifications (dynamic weight pruning) and modified complexity penalty term methods in which the weighting of the penalty term is adjusted during the training process.
1 INTRODUCTION
One of the central topics in the field of neural networks is that of model selection. Both the theoretical and practical sides of this have been intensively investigated, and a vast array of methods have been suggested to perform this task. A widely used class of techniques starts by choosing an 'oversized' network architecture, then either removing redundant elements based on some measure of saliency (pruning), adding a further term to the cost function penalizing complexity (penalty terms), or observing the error on a further validation set of examples and stopping training as soon as this performance begins to deteriorate (stopped training). The first two methods can be viewed as variations of long established statistical techniques
corresponding in the case of pruning to specification searches, and with respect to
penalty terms as regularization or biased regression.
The method of stopped training, on the other hand, seems to be one of the true innovations to come out of neural network research. Here, the model chosen doesn't require the training process to converge; rather, the training process is used to perform a directed search of weight space to find a model with superior generalization performance. Recent theoretical ([B,C,91], [F,91], [F,Z,91]) and empirical results ([H,F,Z,92], [W,R,H,90]) have provided strong evidence for the efficiency of stopped training. In this paper we will show that generalization performance can be further enhanced by expanding the 'nonconvergent method' of stopped training to include dynamic topology modifications (dynamic pruning) and modified complexity penalty term methods in which the weighting of the penalty term is adjusted during the training process. Here, the empirical results are based on an extensive sequence of simulation examples designed to reduce the effects of domain dependence on the performance comparisons.
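As a minimal illustration of the procedure (and not code from this paper), stopped training amounts to the following loop, where train_epoch and validation_error are hypothetical stand-ins for one pass of backpropagation and an evaluation on the validation set:

def stopped_training(get_weights, set_weights, train_epoch, validation_error,
                     max_epochs=500, patience=20):
    # Train while tracking validation error; keep the best weights seen so far
    # and stop once the error has deteriorated for `patience` consecutive epochs.
    best_error = float("inf")
    best_weights = get_weights()
    epochs_since_best = 0
    for _ in range(max_epochs):
        train_epoch()
        error = validation_error()
        if error < best_error:
            best_error, best_weights, epochs_since_best = error, get_weights(), 0
        else:
            epochs_since_best += 1
            if epochs_since_best >= patience:
                break
    set_weights(best_weights)
    return best_error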
2 CLASSICAL MODEL SELECTION
Classical model selection methods are generally divided into a number of steps
that are performed independently. The first step consists of choosing a network
architecture, then either an objective function (possibly including a penalty term)
is chosen directly, or in a Bayesian setting, prior distributions on the elements of
the data generating process (noise, weights in the model, regularizers, etc.) are
specified from which an objective function is derived. Next, using the specified
objective function, the training process is started and continued until a convergence
criterion is fulfilled. The resulting parametrization of the given architecture is then
placed in a 'pool' from which a final model will be selected.
The next step can consist of a modification of the network architecture (for example by pruning weights/hidden-neurons/input-neurons), or of the penalty term (for
example by changing its weighting in' the objective function) or of the Bayesian
prior distributions. The last two modifications then result in a modification of
the objective function. This establishes a new framework for the training process
which is then restarted and continued until convergence, producing another model
for the pool. This process is iterated until the model builder is satisfied that the
pool contains a reasonable diversity of candidate models, which are then compared
with one another using some estimator of generalization ability, (for example, the
performance on a validation set).
Stopped training, on the other hand, has a fundamentally different character. Although the choice of framework remains the same, the essential innovation consists
of considering every parametrization of a given architecture as a potential model.
This contrasts with classical methods in which only those parametrizations corresponding to minima of the objective function are taken into consideration for the
model pool.
Under the weight of accumulated empirical evidence (see [W,R,H,90], [H,F,Z,92])
theorists have begun to investigate the properties of this technique and have been
able to show that stopped training has the same sort of regularization effect (i.e.
reduction of model variance at the cost of bias) that penalty terms provide (see
[B,C,91], [F,91]). Since the basic effect of pruning procedures is also to reduce
network complexity (and consequent model variance) one sees that there is a close
relationship in the instrumental effects of stopped training, pruning and regularization. The question remains whether (or under what circumstances) anyone or
combination of these methods produces superior results.
3 THE METHODS TESTED
In our experiments, a single hidden layer feedforward network with tanh activation
functions and ten hidden units was used to fit data sets generated in such a manner
that network complexity had to be reduced or constrained to prevent overfitting. A
variety of both classical and non convergent methods were tested for this purpose.
The first we will discuss used weight pruning. To characterize the relevance of a
weight in a given network, three different test variables were used. The first simply
measures weight size under the assumption that the training process naturally forces
nonrelevant weights into a region around zero. The second test variable is that used
in the Optimal Brain Damage (OBD) pruning procedure of Le Cun et al. (see
[L,D,S,90]). The final test variables considered are those proposed by Finnoff and
Zimmermann in [F,Z,91], based on significance tests for deviations from zero in the
weight update process.
Two pruning algorithms were used in the experiments, both of which attempt to emulate successful interactive methods. In the first algorithm, one removes a certain fixed percentage of weights in the network after a stopping criterion is reached. The reduced network is then trained further until the stopping criterion is once again fulfilled. This process is then repeated until performance breaks down completely. This method will be referred to in the following as auto-pruning and was implemented using all three types of test variables to determine the weights to be removed. The only difference lay in the stopping criterion used. In the case of the OBD test variables, training was stopped after the training process converged. In the case of the statistical and small weight test variables, training was stopped whenever overtraining (defined by a repeated increase in the error on a validation set) was observed. A final (restart) variant of auto-pruning using the statistical test variables was also tested. This version of auto-pruning only differs in that the weights are reinitialized (on the reduced topology) after every pruning step. In the tables of results presented in the appendix, the results for auto-pruning using the statistical (resp. small weight, resp. OBD) test variables are denoted by P* (resp. G*, resp. O*). The version of auto-pruning using restarts is denoted by p*.
The second method uses the statistical test variables to both remove and reactivate weights. As in auto-pruning, the network is trained until overfitting is observed after a fixed number of epochs, then test values are calculated for all active and inactive weights. Here a fixed number ε > 0 is given, corresponding to some quantile value of a probability distribution. If the test variable for an active weight falls below ε, the weight is pruned (deactivated). For weights that have already been set to zero, the value of the test variable is compared with ε, and if larger, the weight is reactivated with a small random value. Furthermore, the value of ε is increased by some Δε > 0 after each pruning step until some value ε_max is reached. This method is referred to as epsi-pruning. Epsi-pruning was tested in versions both with (e*) and without restarts (E*).
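A compact sketch of the two pruning procedures follows; it is purely illustrative, with NumPy arrays standing in for the network's weights, the small-weight magnitude standing in for a test variable, and train_until_stop a hypothetical routine implementing the stopping criterion described above:

import numpy as np

def auto_prune_round(w, mask, train_until_stop, frac=0.1):
    # One auto-pruning round: train until the stopping criterion fires, then
    # remove the fraction `frac` of active weights with the smallest test value.
    w = train_until_stop(w, mask)
    active = np.flatnonzero(mask)
    n_remove = max(1, int(frac * active.size))
    test_value = np.abs(w[active])              # small-weight test variable
    to_prune = active[np.argsort(test_value)[:n_remove]]
    mask[to_prune] = False
    w[to_prune] = 0.0
    return w, mask

def epsi_prune_step(w, mask, test_value, eps, rng, reinit_scale=0.01):
    # One epsi-pruning step: deactivate active weights whose test value falls
    # below eps, and reactivate inactive ones whose test value exceeds eps.
    t = test_value(w)
    deactivate = mask & (t < eps)
    reactivate = (~mask) & (t > eps)
    w[deactivate] = 0.0
    mask[deactivate] = False
    mask[reactivate] = True
    w[reactivate] = reinit_scale * rng.standard_normal(int(reactivate.sum()))
    return w, mask

In a full run, auto_prune_round is iterated until performance collapses, while epsi_prune_step alternates with training epochs as ε is raised by Δε toward ε_max.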
Two complexity penalty terms were considered. These consist of a further term C_λ(w) added to the error function which forces the network to achieve a compromise between fit and network complexity during the training process; here, the parameter λ ∈ [0, ∞) controls the strength of the complexity penalty. The first is the quadratic term, the first derivative of which leads to the so-called weight decay term in the weight updates (see [H,P,89]). The second is the Weigend/Rumelhart penalty term (see [W,R,H,91]). The weight decay penalty term was tested using two techniques. In the first of these (D*), λ was held constant throughout the training process. In the second (d*), λ was set to zero until overtraining was observed, then turned on and held constant for the remainder of the training process. The Weigend/Rumelhart penalty term was also tested using these two methods (denoted in the following tables by W*, resp. w*). Further, the algorithm suggested by A. Weigend in [W,R,H,91], in which the value of λ is varied during training, was considered (wF).
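For concreteness, the standard forms of these two penalties are given below, taking λ as the weighting and w₀ as a fixed scale constant in the Weigend/Rumelhart term; the specific constants used in the experiments here are an assumption on our part, not stated in this section:

C^{\text{decay}}_\lambda(w) = \lambda \sum_i w_i^2,
\qquad
C^{\text{WRH}}_\lambda(w) = \lambda \sum_i \frac{w_i^2 / w_0^2}{1 + w_i^2 / w_0^2}.

Differentiating the quadratic term contributes the familiar −2λw_i weight-decay term to each weight update.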
In addition to the pruning and penalty term methods investigated, two (simple) versions of stopped training were tested, in one case (nN) with a constant learning step throughout, and in the other (nF) with the step size reduced after overtraining was observed. Finally, three benchmarks were included. All these involved training a network until convergence, to emulate the situation when no precautions are taken to prevent overfitting other than varying the number of hidden units. The number of hidden units in these benchmark tests was set at three, six and ten (#3, #6, ##), this last network having the same topology as that used in the remaining tests.
4 THE DATA GENERATION PROCESSES
To test the methods under consideration, a number of processes were used to generate data sets. By testing on a sufficiently wide range of controlled examples one hopes to reduce the domain dependence that might arise in the performance comparisons. The data used in our experiments was based on pairs (y_i, x_i), i = 1, ..., T, T ∈ N, with targets y_i ∈ R and inputs x_i = (x_i^1, ..., x_i^K) ∈ [−1, 1]^K, where y_i = g(x_i^1, ..., x_i^j) + u_i, for j, K ∈ N. Here, g represents the structure in the data, x^1, ..., x^j the relevant inputs, x^{j+1}, ..., x^K the irrelevant or decoy inputs, and u_i a stochastic disturbance term.
The first group of experiments was based on an additive structure g of the following form, with j = 5 and K = 10: g(x_i^1, ..., x_i^{10}) = Σ_{k=1}^{5} l(a_k x_i^k), with a_k ∈ R and l either the identity on R or sin. The second class of models investigated had a highly nonlinear product structure g with j = 3, K = 10 and g(x_i^1, ..., x_i^{10}) = Π_{k=1}^{3} l(a_k x_i^k), with a_k ∈ R and l once again either the identity on R or sin. The next structure considered was constructed using sums of radial basis functions (RBFs) as follows: g(x_i^1, ..., x_i^5) = Σ_{l=1}^{8} β_l exp( −Σ_{k=1}^{5} (a_{k,l} − x_i^k)² / (2σ²) ), with a_{k,l} ∈ R for k = 1, ..., 5, l = 1, ..., 8. Here, for every l = 1, ..., 8 the vector parameter (a_{1,l}, ..., a_{5,l}) corresponds to the center of the RBF. The final group of experiments was conducted using data generated by feedforward network activation functions. The network used for this task had fifty input units, two hundred hidden units and
one output. In every experiment, the data was divided into three disjoint subsets
D_T, D_V, D_G: the first set D_T was used for training, the second D_V (validation) set to test for overfitting and to steer the pruning algorithms, and the third D_G (generalization) set to test the quality of the model selection process.
5 DISCUSSION

The results of the experiments are given below. Here we give a short review of the most interesting phenomena observed.
Notable in a general sense is a striking domain dependence in the performance,
which illustrates the danger of basing a comparison of methods on tests using a
single (particularly small) data set. Another valuable observation is that by testing
at higher levels of significance, apparent performance differences can dwindle or even
disappear. Finally, one sees that even in the examples without noise that overfitting
occurs, which contradicts the frequently stated conviction that overfitting is noise
fitting.
With regard to specific methods, one sees that all the methods tested significantly improved generalization performance when compared to the benchmarks. Further, the results show that the extended nonconvergent methods are on average superior (sometimes dramatically so) to their classical counterparts. In particular, the performance of penalty terms is greatly enhanced if they are first introduced in the training process after overtraining is observed. Further, dynamic pruning using the statistical or even the small weight test variables produces significantly better results than stopped training alone or than using the Optimal Brain Damage (OBD) weight elimination method, which requires training to minima of the objective function. A final notable observation is that the pruning methods (especially those using restarts) generally work better in the examples with a great deal of noise, while the penalty term methods are superior when the structure is highly nonlinear.
6 TABLES OF RESULTS
The experiments were performed as follows: First, each data generating process
was used to produce six independent sets of data and initial weights to increase the
statistical significance of observed effects and to help reduce the effects of any data
set specific anomalies. In a second step, the parameters of the training processes
were optimized for each example by extensive testing, then a fixed value for each
parameter was chosen for use across the entire range of experiments. With these
parameters, each method was tested on all of the six data sets produced by one data
generating process. Both the penalty terms and the pruning methods were tested
with different settings of the relevant parameters in each model. The parameter
values used in the simulations and an overview of the methods tested are collected
in the following two tables.
6.1 Parameter Settings of the Experiments
[Table: parameter settings of the experiments for the datasets exp_0_n, exp_3_n, exp_6_n, id_7_n, id_8_n, id_9_n, n_0_id, n_1_id, n_2_id, n_0_sin, n_1_sin, n_2_sin, net_0_n, net_3_n, net_6_n, sin_0_n, sin_3_n and sin_6_n, giving for each dataset the noise level (between 0.0 and 0.9), the set sizes |D_T|/|D_V|/|D_G| (400/200/1000, 200/100/1000 or 1400/600/1000), and the learning step before/after overfitting (0.05/0.005 or 0.05/0.01).]

6.2 Overview of Methods Tested
The following tables give categorical rankings of the results. The rankings were
calculated as follows: The method with the best performance was given ranking
I, then the performance of each following method was compared with that of the
method on the first position using a modified t-test statistic. The first method in
the list whose test results deviated from that on the first position to at least the
quantile value of the statistic given at the head of the table was then used to start
the second category. All those whose test results did not deviate by at least this
amount were given the same ranking as the leading method of the category, (in this
case 1). Following categories were then formed in an analogous fashion using test
results measured against the performance of the leading method at the head of the
category.
The results are presented in two tables. The first contains the results for the data
generating processes without noise and the second for the models with noise. The
categorical rankings given were determined using the procedure described above at
a 0.9 level of significance. The ordering of the methods given, listed in the first
column, is based on the average ranking over all the simulations listed in the table.
This average is given in the second column.
6.2.1 Data Generating Processes without Noise

Classification by objective function, t_α = 0.9
method   av    exp_0_n   n_0_id   n_0_sin   net_0_n   sin_0_n
d*       1.6      1         3        1          2         1
P*       1.8      2         2        2          1         2
w*       2.0      1         2        3          3         1
wF       2.0      2         1        3          2         2
G*       2.2      2         3        3          1         2
E*       2.6      2         5        3          1         2
O*       2.6      3         4        2          2         2
p*       2.6      4         5        2          1         1
nF       3.0      3         5        3          2         2
e*       3.8      4         6        4          3         2
nN       3.8      4         7        4          1         3
##       5.2      5         8        4          3         6
W*       5.6      8        10        7          1         2
D*       5.8      8        11        7          2         1
6#       6.2      6         9        6          5         5
3#       6.4      7        12        5          4         4
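The category-forming procedure behind the rankings in these tables can be sketched in a few lines; t_stat is a hypothetical stand-in for the modified t-test statistic and threshold for the chosen quantile value (this is not the authors' code):

def categorical_ranks(sorted_results, t_stat, threshold):
    # sorted_results: per-method result arrays, ordered best-first. A new
    # category opens whenever a method deviates from the leader of the
    # current category by at least the threshold.
    ranks, leader, rank = [], 0, 1
    for i, res in enumerate(sorted_results):
        if i > 0 and t_stat(sorted_results[leader], res) >= threshold:
            rank += 1
            leader = i
        ranks.append(rank)
    return ranks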
6.2.2 Data Generating Processes with Noise

Classification by objective function, t_α = 0.9

method  av   exp_3_n exp_6_n id_9_n n_1_id n_1_sin n_2_id n_2_sin net_3_n net_6_n sin_3_n sin_6_n
P*     2.1     3       1      4      2       2      1       3       3       1       2       2
d*     2.2     5       5      3      1       1      2       1       1       1       2       2
p*     2.2     2       1      1      3       5      3       2       1       2       1       1
wF     2.2     4       5      3      1       4      1       2       2       1       2       1
e*     2.2     1       2      1      4       5      3       4       2       2       1       1
E*     2.6     3       3      2      4       4      3       3       2       1       2       2
G*     2.7     4       4      3      3       3      3       3       2       1       2       2
O*     2.8     5       5      3      3       5      4       1       1       1       2       1
w*     2.8     5       5      3      3       5      3       3       1       1       2       2
nF     2.9     5       5      3      3       5      3       3       1       1       2       2
nN     3.5     5       5      4      4       5      3       3       3       1       3       3
D*     3.7     5       4      5      1       5      7       5       2       2       1       1
W*     4.1     5       4      5      5       6      7       5       2       2       1       1
##     5.2     6       6      6      6       5      5       5       5       4       5       5
3#     5.3     7       7      7      7       5      7       6       4       3       4       4
6#     5.4     8       8      8      5       4      6       3       4       4       5       5
REFERENCES

[B,C,91] Baldi, P. and Chauvin, Y., Temporal evolution of generalization during learning in linear networks, Neural Computation 3, 1991, pp. 589-603.

[F,91] Finnoff, W., Complexity measures for classes of neural networks with variable weight bounds, in Proc. Int. Joint Conf. on Neural Networks, Singapore, 1991.

[F,Z,91] Finnoff, W., Zimmermann, H.G., Detecting structure in small datasets by network fitting under complexity constraints, to appear in Proc. of 2nd Ann. Workshop on Computational Learning Theory and Natural Learning Systems, Berkeley, 1991.

[H,P,89] Hanson, S. J., and Pratt, L. Y., Comparing biases for minimal network construction with back-propagation, in Advances in Neural Information Processing I, D. S. Touretzky, Ed., Morgan Kaufman, 1989.

[H,F,Z,92] Hergert, F., Finnoff, W. and Zimmermann, H.G., A comparison of weight elimination methods for reducing complexity in neural networks, to be presented at Int. Joint Conf. on Neural Networks, Baltimore, 1992.

[L,D,S,90] Le Cun, Y., Denker, J. and Solla, S., Optimal Brain Damage, in Proceedings of Neural Information Processing Systems II, Denver, 1990.

[W,R,H,91] Weigend, A., Rumelhart, D., and Huberman, B., Generalization by weight elimination with application to forecasting, in Advances in Neural Information Processing III, R. P. Lippman and J. Moody, Eds., Morgan Kaufman, 1991.
Linear dynamical neural population models through
nonlinear embeddings

Yuanjun Gao*1, Evan Archer*12, Liam Paninski12, John P. Cunningham12
Department of Statistics1 and Grossman Center2
Columbia University
New York, NY, United States
yg2312@columbia.edu, evan@stat.columbia.edu,
liam@stat.columbia.edu, jpc2181@columbia.edu
* These authors contributed equally.
Abstract
A body of recent work in modeling neural activity focuses on recovering lowdimensional latent features that capture the statistical structure of large-scale neural
populations. Most such approaches have focused on linear generative models,
where inference is computationally tractable. Here, we propose fLDS, a general
class of nonlinear generative models that permits the firing rate of each neuron
to vary as an arbitrary smooth function of a latent, linear dynamical state. This
extra flexibility allows the model to capture a richer set of neural variability than
a purely linear model, but retains an easily visualizable low-dimensional latent
space. To fit this class of non-conjugate models we propose a variational inference
scheme, along with a novel approximate posterior capable of capturing rich temporal correlations across time. We show that our techniques permit inference in a
wide class of generative models.We also show in application to two neural datasets
that, compared to state-of-the-art neural population models, fLDS captures a much
larger proportion of neural variability with a small number of latent dimensions,
providing superior predictive performance and interpretability.
1 Introduction
Until recently, neural data analysis techniques focused primarily upon the analysis of single neurons
and small populations. However, new experimental techniques enable the simultaneous recording
of ever-larger neural populations (at present, hundreds to tens of thousands of neurons). Access to
these high-dimensional data has spurred a search for new statistical methods. One recent approach
has focused on extracting latent, low-dimensional dynamical trajectories that describe the activity
of an entire population [1, 2, 3]. The resulting models and techniques permit tractable analysis and
visualization of high-dimensional neural data. Further, applications to motor cortex [4] and visual
cortex [5, 6] suggest that the latent trajectories recovered by these methods can provide insight into
underlying neural computations.
Previous work for inferring latent trajectories has considered models with a latent linear dynamics
that couple with observations either linearly, or through a restricted nonlinearity [1, 3, 7]. When
the true data generating process is nonlinear (for example, when neurons respond nonlinearly to
a common, low-dimensional unobserved stimulus), the observation may lie in a low-dimensional
nonlinear subspace that can not be captured using a mismatched observation model, hampering
the ability of latent linear models to recover the low-dimensional structure from the data. Here,
we propose fLDS, a new approach to inferring latent neural trajectories that generalizes several
previously proposed methods. As in previous methods, we model a latent dynamical state with a
linear dynamical system (LDS) prior. But, under our model, each neuron?s spike rate is permitted to
vary as an arbitrary smooth nonlinear function of the latent state. By permitting each cell to express
its own, private non-linear response properties, our approach seeks to find a nonlinear embedding of
a neural time series into a linear-dynamical state space.
To perform inference in this nonlinear model we adapt recent advances in variational inference
[8, 9, 10]. Using a novel approximate posterior that is capable of capturing rich correlation structure
in time, our techniques can be applied to a large class of latent-LDS models. We show that our
variational inference approach, when applied to learn generative models that predominate in the neural
data analysis literature, performs comparably to inference techniques designed for a specific model.
More interestingly, we show in both simulation and application to two neural datasets that our fLDS
modeling framework yields higher prediction performance with a more compact and informative
latent representation, as compared to state-of-the-art neural population models.
2 Notation and overview of neural data
Neuronal signals take the form of temporally fast (~1 ms) spikes that are typically modeled as discrete events. Although the spiking response of individual neurons has been the focus of intense research, modern experimental techniques make it possible to study the simultaneous activity of large numbers of neurons. In real data analysis, we usually discretize time into small bins of duration Δt and represent the response of a population of n neurons at time t by a vector x_t of length n, whose ith entry represents the number of spikes recorded from neuron i in time bin t, where i ∈ {1, ..., n}, t ∈ {1, ..., T}. Additionally, because spike responses are variable even under identical experimental conditions, it is commonplace to record many repeated trials, r ∈ {1, ..., R}, of the same experiment. Here, we denote x_rt = (x_rt1, ..., x_rtn)^⊤ ∈ N^n as the spike counts of n neurons for time t and trial r. When the time index is suppressed, we refer to a data matrix x_r = (x_r1, ..., x_rT) ∈ N^{T×n}. We also use x = (x_1, ..., x_R) ∈ N^{T×n×R} to denote all the observations. We use analogous notation for other temporal variables; for instance z_r and z.
3 Review of latent LDS neural population models
Latent factor models are popular tools in neural data analysis, where they are used to infer low-dimensional, time-evolving latent trajectories (or factors) z_rt ∈ R^m, m ≪ n, that capture a large proportion of the variability present in a neural population recording. Many recent techniques follow this general approach, with distinct noise models [3], different priors on the latent factors [11, 12], extra model structure [13], and so on.
We focus upon one thread of this literature that takes its inspiration directly from the classical Kalman filter. Under this approach, the dynamics of a population of n neurons are modulated by an unobserved, linear dynamical system (LDS) with an m-dimensional latent state z_rt that evolves according to

z_r1 ∼ N(μ_1, Q_1),   (1)
z_r(t+1) | z_rt ∼ N(A z_rt, Q),   (2)
where A is an m × m linear dynamics matrix, and the matrices Q_1 and Q are the covariances of the initial states and Gaussian innovation noise, respectively. The spike count observation is then related to the latent state via an observation model,

x_rti | z_rt ∼ P( λ_rti = [f(z_rt)]_i ).   (3)
where [f(z_rt)]_i is the ith element of a deterministic 'rate' function f(z_rt) : R^m → R^n, and P(λ) is a noise model with parameter λ. Each choice among the ingredients f and P leads to a model with distinct characteristics. When P is a Gaussian distribution with mean parameter λ and linear rate function f, the model reduces to the classical Kalman filter. All operations in the Kalman filter are conjugate, and inference may be performed in closed form. However, any non-Gaussian noise model P or nonlinear rate function f breaks conjugacy and necessitates the use of approximate inference techniques. This is generally the case for neural models, where the discrete, positive nature of spikes suggests the use of discrete noise models with positive link [1, 3].
Examples of latent LDS models for neural populations: Existing LDS models usually impose strong assumptions on the rate function. When P is chosen to be Poisson with f(z_rt) the (element-wise) exponential of a linear transformation of the latent state, we recover the Poisson linear dynamical system model (PLDS) [1],

x_rti | z_rt ∼ Poisson( λ_rti = exp(c_i z_rt + d_i) ),   (4)

where c_i is the ith row of the n × m observation matrix C and d_i ∈ R is the baseline firing rate of neuron i. With P chosen to be a generalized count (GC) distribution and linear rate f, the model is called the generalized count linear dynamical system (GCLDS) [3],

x_rti | z_rt ∼ GC( λ_rti = c_i z_rt, g_i(·) ),   (5)

where GC(λ, g(·)) is a distribution family parameterized by λ ∈ R and a function g(·) : N → R, distributed as

p_GC(k; λ, g(·)) = exp(λk + g(k)) / ( k! M(λ, g(·)) ),   k ∈ N,   (6)

where M(λ, g(·)) = Σ_{k=0}^{∞} exp(λk + g(k)) / k! is the normalizing constant. The GC model can flexibly capture under- and over-dispersed count distributions.
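As an illustration of eq. (6), the GC log-pmf can be evaluated numerically by truncating the normalizer M(λ, g(·)) at some large k_max; the truncation point is our assumption for the sketch, not part of the model, and g must accept integer arrays:

import numpy as np
from scipy.special import gammaln

def gc_logpmf(k, lam, g, k_max=200):
    ks = np.arange(k_max + 1)
    log_weights = lam * ks + g(ks) - gammaln(ks + 1)   # log of exp(lam*k + g(k)) / k!
    log_M = np.logaddexp.reduce(log_weights)           # log normalizing constant
    k = np.asarray(k)
    return lam * k + g(k) - gammaln(k + 1) - log_M

# Setting g(k) = 0 recovers a Poisson distribution with rate exp(lam).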
4 Nonlinear latent variable models for neural populations

4.1 Generative Model: Linear dynamical system with nonlinear observation
We relax the linear assumptions of the previous LDS-based neural population models by incorporating a per-neuron rate function. We retain the latent LDS of eq. 1 and eq. 2, but select an observation model such that each neuron has a separate nonlinear dependence upon the latent variable,

x_rti | z_rt ∼ P( λ_rti = [f(z_rt)]_i ),   (7)

where P(λ) is a noise model with parameter λ; f : R^m → R^n is an arbitrary continuous function from the latent state into the spike rate; and [f(z_rt)]_i is the ith element of f(z_rt). In principle, the rate functions may be represented using any technique for function approximation. Here, we represent f(·) through a feed-forward neural network model. The parameters then amount to the weights and biases of all units across all layers. For the remainder of the text, we use θ to denote all generative model parameters: θ = (μ_1, Q_1, A, Q, ψ), where ψ collects the parameters of f. We refer to this class of models as fLDS.
To refer to an fLDS with a given noise model P , we prepend the noise model to the acronym. In the
experiments, we will consider both PfLDS (taking P to be Poisson) and GCfLDS (taking P to be a
generalized count distribution).
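To make the generative process concrete, here is a small sketch that samples from a PfLDS: an LDS prior (eqs. 1-2) pushed through a one-hidden-layer tanh network with exponential output, followed by Poisson spiking (eq. 7). The specific network architecture and the placeholder weights are our assumptions for illustration, not prescribed by the model class:

import numpy as np

def sample_pflds(A, Q, Q1, mu1, W1, b1, W2, b2, T, rng):
    # Sample the latent LDS trajectory z (eqs. 1-2).
    m = mu1.size
    z = np.zeros((T, m))
    z[0] = rng.multivariate_normal(mu1, Q1)
    for t in range(1, T):
        z[t] = rng.multivariate_normal(A @ z[t - 1], Q)
    # Push z through a small rate network f (eq. 7), then draw Poisson counts.
    h = np.tanh(z @ W1.T + b1)        # hidden layer of the rate network
    rates = np.exp(h @ W2.T + b2)     # f(z): positive firing rates, shape (T, n)
    x = rng.poisson(rates)            # (T, n) spike counts
    return z, x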
4.2 Model Fitting: Auto-encoding variational Bayes (AEVB)
Our goal is to learn the model parameters θ and to infer the posterior distribution over the latent variables z. Ideally, we would perform maximum likelihood estimation on the parameters, θ* = argmax_θ log p_θ(x) = argmax_θ Σ_{r=1}^{R} log ∫ p_θ(x_r, z_r) dz_r, and compute the posterior p_θ*(z|x). However, under an fLDS neither p_θ(z|x) nor p_θ(x) is computationally tractable (both due to the noise model P and the nonlinear observation model f(·)). As a result, we pursue a stochastic variational inference approach to simultaneously learn the parameters θ and infer the distribution of z.

The strategy of variational inference is to approximate the intractable posterior distribution p_θ(z|x) by a tractable distribution q_φ(z|x), which carries its own parameters φ.² With an approximate posterior³ in hand, we learn both p_θ(z, x) and q_φ(z|x) simultaneously by maximizing the evidence lower bound (ELBO) of the marginal log likelihood:
log p_θ(x) ≥ L(θ, φ; x) = Σ_{r=1}^{R} L(θ, φ; x_r) = Σ_{r=1}^{R} E_{q_φ(z_r|x_r)} [ log ( p_θ(x_r, z_r) / q_φ(z_r|x_r) ) ].   (8)
² Here, we consider a posterior q_φ(z|x) that is conditioned explicitly upon x. However, this is not necessary for variational inference.
³ The approximate posterior is also sometimes called a 'recognition model'.
We optimize L(θ, φ; x) by stochastic gradient ascent, using a Monte Carlo estimate of the gradient ∇L. It is well-documented that Monte Carlo estimates of ∇L are typically of very high variance, and strategies for variance reduction are an active area of research [14, 15].
Here, we take an auto-encoding variational Bayes (AEVB) approach [8, 9, 10] to estimating ∇L. In AEVB, we choose an easy-to-sample random variable ε ∼ p(ε) and sample z through a transformation of the random sample ε parameterized by the observations x and parameters φ: z = h_φ(x, ε). This yields a rich set of variational distributions q_φ(z|x). We then use the unbiased gradient estimator on minibatches consisting of a randomly selected single trial x_r,
∇L(θ, φ; x) ≈ R ∇L(θ, φ; x_r)   (9)
            ≈ R (1/L) Σ_{l=1}^{L} [ ∇ log p_θ(x_r, h_φ(x_r, ε_l)) − ∇ E_{q_φ(z_r|x_r)}[ log q_φ(z_r|x_r) ] ],   (10)

where the ε_l are iid samples from p(ε). In practice, we evaluate the gradient in eq. 9 using a single sample from p(ε) (L = 1) and use ADADELTA for stochastic optimization [16].
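A sketch of the single-sample estimate that eqs. (9)-(10) build on: draw ε ∼ N(0, I), map it through h_φ, and evaluate the log-likelihood ratio; in practice an autodiff library differentiates this quantity with respect to θ and φ. The function names are placeholders of ours, not the authors' code:

import numpy as np

def elbo_one_sample(x_r, mu_phi, chol_phi, log_p_joint, rng):
    mu = mu_phi(x_r)                  # (mT,) posterior mean
    Lc = chol_phi(x_r)                # (mT, mT) Cholesky factor of Sigma_phi
    eps = rng.standard_normal(mu.size)
    z = mu + Lc @ eps                 # reparameterized sample z = h_phi(x, eps)
    # log q_phi(z|x) for a Gaussian, evaluated at its own sample:
    log_q = (-0.5 * eps @ eps
             - np.sum(np.log(np.diag(Lc)))
             - 0.5 * mu.size * np.log(2 * np.pi))
    return log_p_joint(x_r, z) - log_q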
Choice of approximate posterior q_φ(z|x): The AEVB approach to inference is appealing in its generality: it is well-defined for a large class of generative models p_θ(x, z) and approximate posteriors q_φ(z|x). In practice, however, the performance of the algorithm has a strong dependence upon the particular structure of these models. In our case, we use an approximate posterior that is designed explicitly to parameterize a temporally correlated posterior [17]. We use a Gaussian approximate posterior,

q_φ(z_r | x_r) = N( μ_φ(x_r), Σ_φ(x_r) ),   (11)

where μ_φ(x_r) is an mT × 1 mean vector and Σ_φ(x_r) is an mT × mT covariance matrix. Both μ_φ(x_r) and Σ_φ(x_r) are parameterized by the observations x through a structured neural network, as described in detail in the supplementary material. We can sample from this approximate posterior by setting p(ε) = N(0, I) and h_φ(ε; x) = μ_φ(x) + Σ_φ(x)^{1/2} ε, where Σ_φ^{1/2} is the Cholesky factor of Σ_φ. This approach is similar to that of [8], except that we impose a block-tridiagonal structure upon the precision matrix Σ_φ^{−1} (rather than a diagonal covariance), which can express rich temporal correlations across time (essential for the posterior to capture the smooth, correlated trajectories typical of LDS posteriors), while remaining tractable, with a computational complexity that scales linearly with T, the length of a trial.
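The role of the block-tridiagonal precision can be illustrated as follows. This is a dense sketch for clarity (exploiting the banded structure is what gives the linear-in-T cost), and the block values are assumed inputs, e.g. outputs of the recognition network, that must form a positive definite matrix:

import numpy as np

def sample_blocktridiag_gaussian(mu, D, B, rng):
    # D: (T, m, m) diagonal blocks; B: (T-1, m, m) lower off-diagonal blocks
    # of the precision matrix P = Sigma^{-1}, assumed positive definite.
    T, m, _ = D.shape
    P = np.zeros((T * m, T * m))
    for t in range(T):
        P[t*m:(t+1)*m, t*m:(t+1)*m] = D[t]
        if t < T - 1:
            P[(t+1)*m:(t+2)*m, t*m:(t+1)*m] = B[t]
            P[t*m:(t+1)*m, (t+1)*m:(t+2)*m] = B[t].T
    Lc = np.linalg.cholesky(P)               # Cholesky factor of the precision
    eps = rng.standard_normal(T * m)
    return mu + np.linalg.solve(Lc.T, eps)   # z ~ N(mu, P^{-1})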
5 Experiments

5.1 Simulation experiments
Linear dynamical system models with shared, fixed rate function: Our AEVB approach in
principle permits inference in any latent LDS model. To illustrate this flexibility, we simulate
3 datasets from previously-proposed models of neural responses. In our simulations, each datagenerating model has a latent LDS state of m = 2 dimensions, as described by eq. 1 and eq. 2. In all
data-generating models, spike rates depend on the latent state variable through a fixed link function f
that is common across neurons. Each data-generating model has a distinct observation model (eq. 3):
Bernoulli (logistic link), Poisson (exponential link), or negative-binomial (exponential link).
We compare PLDS and GCLDS model fits to each datasets, using both our AEVB algorithm and two
EM-based inference algorithms: LapEM (which approximates p(z|x) with a multivariate Gaussian
by Laplace approximation in the E-step [1, 3]) and VBDual (which approximates p(z|x) with a
multivariate Gaussian by variational inference, through optimization in the dual space [18, 3]).
Additionally, we fit PfLDS and GCfLDS models with the AEVB algorithm. On this linear simulated
data we do not expect these nonlinear techniques to outperform linear methods. In all simulation
studies we generate 20 training trials and 20 testing trials, with 100 simulated neurons and 200 time
bins for each trial. Results are averaged across 10 repeats.
We compare the predictive performance and running times of the algorithms in Table 1. For both
PLDS and GCLDS, our AEVB algorithm gives results comparable to, though slightly worse than, the
4
Table 1: Simulation results with a linear observation model: Each column contains results for a distinct experiment, where the true data-generating distribution was either Bernoulli, Poisson or negative-binomial. For each generative model and inference algorithm (one per row), we report the predictive log likelihood (PLL) and computation time (in minutes) of the model fit to each dataset. We report the PLL (divided by the number of observations) on test data, using one-step-ahead prediction. When training a model using the AEVB algorithm, we run 500 epochs before stopping. For LapEM and VBDual, we initialize with nuclear norm minimization [2] and stop either after 200 iterations or when the ELBO (scaled by the number of time bins) increases by less than ε = 10⁻⁹ after one iteration.
                         Bernoulli         Poisson           Negative-binomial
Model    Inference     PLL      Time     PLL      Time     PLL      Time
PLDS     LapEM        -0.446      3     -0.385      5     -0.359      5
PLDS     VBDual       -0.446    157     -0.385    170     -0.359    138
PLDS     AEVB         -0.445     50     -0.387     55     -0.363     53
PfLDS    AEVB         -0.445     56     -0.387     58     -0.362     50
GCLDS    LapEM        -0.389     40     -0.385     97     -0.359    101
GCLDS    VBDual       -0.389    131     -0.385    126     -0.359    127
GCLDS    AEVB         -0.390     69     -0.386     75     -0.361     73
GCfLDS   AEVB         -0.390     72     -0.386     76     -0.361     68
LapEM and VBEM algorithms. Although PfLDS and GCfLDS assume a much more complicated generative model, both provide comparable predictive performance and running time. We note that while LapEM is competitive in running time in this relatively small-data setting, the AEVB algorithm may be more desirable in a large-data setting, where it can learn model parameters even before seeing the full dataset. In contrast, both LapEM and VBDual require a full pass through the data in the E-step before the M-step parameter updates. The recognition model used by AEVB can also be used to initialize LapEM and VBEM in the linear LDS cases.
Simulation with 'grid cell' type response: A grid cell is a type of neuron that is activated when an animal occupies any vertex of a grid spanning the environment [19]. When an animal moves along a one-dimensional line in space, grid cells exhibit oscillatory responses. Motivated by the response properties of grid cells, we simulated a population of 100 spiking neurons with oscillatory link functions and a shared, one-dimensional input z_rt ∈ R given by

z_r1 = 0,   (12)
z_r(t+1) ∼ N(0.99 z_rt, 0.01).   (13)
The log firing rate of each neuron, indexed by i, is coupled to the latent variable z_rt through a sinusoid with a neuron-specific phase φ_i and frequency ω_i:

x_rti ∼ Poisson( λ_rti = exp( 2 sin(ω_i z_rt + φ_i) − 2 ) ).   (14)
We generated each φ_i uniformly at random in [0, 2π] and set ω_i = 1 for neurons with index i ≤ 50 and ω_i = 3 for neurons with index i > 50. We simulated 150 training and 20 testing trials, each with T = 120 time bins. We repeated this simulated experiment 10 times.
We compare the performance of PLDS with PfLDS, both with a 1-dimensional latent variable. As shown in Figure 1, PLDS is not able to adapt to the nonlinear and non-monotonic link function, and cannot recover the true latent variable (left panel and bottom right panel) or spike rate (upper right panel). On the other hand, the PfLDS model captures the nonlinearity well, recovering the true latent trajectory. The one-step-ahead predictive log likelihood (PLL) on a held-out dataset is -0.622 (se = 0.006) for PLDS and -0.581 (se = 0.006) for PfLDS. A paired t-test for PLL is significant (p < 10⁻⁶).
5.2 Applications to experimentally-recorded neural data
We analyze two multi-neuron spike-train datasets, recorded from primary visual cortex and primary
motor cortex of the macaque brain, respectively. We find that fLDS models outperform PLDS in terms
of predictive performance on held out data. Further, we find that the latent trajectories uncovered by
fLDS are lower-dimensional and more structured than those recovered by PLDS.
5
[Figure 1 graphic: left panel, fitted vs. true latent variable (PLDS R² = 0.75; PfLDS R² = 0.98); upper right panels, firing rates for sample neurons #49-#52 (true, PLDS, PfLDS); bottom right panel, latent variable traces over time.]

Figure 1: Sample simulation result with 'grid cell' type response. Left panel: fitted latent variable compared to true latent variable. Upper right panel: fitted rate compared to the true rate for 4 sample neurons. Bottom right panel: inferred trace of the latent variable compared to the true latent trace. Note that the latent trajectory for a 1-dimensional latent variable is identifiable up to a multiplicative constant, and here we scale the latent variables to lie between 0 and 1.
Macaque V1 with drifting grating stimulus with single orientation: The dataset consists of
148 neurons simultaneously recorded from the primary visual cortex (area V1) of an anesthetized
macaque, as described in [20] (array 5). Data were recorded while the monkey watched a 1280ms
movie of a sinusoidal grating drifting in one of 72 orientations (0°, 5°, 10°, ...). Each of the 72
orientations was repeated R = 50 times. We analyze the spike activity from 300 ms to 1200 ms
after stimulus onset. We discretize the data at Δt = 10 ms, resulting in T = 90 time points per trial.
Following [20], we consider the 63 neurons with well-behaved tuning-curves. We performed both
single-orientation and whole-dataset analysis.
We first use 12 equally spaced grating orientations (0°, 30°, 60°, ...) and analyze each orientation
separately. To increase sample size, for each orientation we pool data from the 2 neighboring
orientations (e.g. for orientation 0°, we include data from orientations 5° and 355°), thereby getting
150 trials for each dataset (we find similar, but more variable, results when we do not include
neighboring orientations). For each orientation, we divide the data into 120 training trials and 30
testing trials. For PfLDS we further divide the 120 training trials into 110 trials for fitting and 10
trials for validation (we use the ELBO on validation set to determine when to stop training). We do
not include a stimulus model, but rather perform unsupervised learning to recover a low-dimensional
representation that combines both internal and stimulus-driven dynamics.
We take orientation 0° as an example (the other orientations exhibit a similar pattern) and compare
the fitted result of PLDS and PfLDS with a 2-dimensional latent space, which should in principle
adequately capture the oscillatory pattern of the neural responses. We find that PfLDS is able to
capture the nonlinear response characteristics of V1 complex cells (Fig. 2(a), black line), while
PLDS can only reliably capture linear responses (Fig. 2(a), blue line). In Fig. 2(b)(c) we project
all trajectories onto the 2-dimensional latent manifold described by the PfLDS. We find that both
techniques recover a manifold that reveals the rotational structure of the data; however, by offsetting
the nonlinear features of the data into the observation model, PfLDS recovers a much cleaner latent
representation (Fig. 2(c)).
We assess the model fitting quality by one-step-ahead prediction on a held-out dataset; we compare
both percentage mean squared error (MSE) reduction and negative predictive log likelihood (NLL)
reduction. We find that PfLDS recovers more compact representations than the PLDS, for the same
performance in MSE and NLL. We illustrate this in Fig. 2(d)(e), where PLDS requires approximately
10 latent dimensions to obtain the same predictive performance as a PfLDS with 3 latent dimensions.
This result makes intuitive sense: during the stimulus-driven portion of the experiment, neural activity
is driven primarily by a low-dimensional, oscillatory stimulus drive (the drifting grating). We find
that the highly nonlinear generative models used by PfLDS lead to lower-dimensional and hence
more interpretable latent-variable representations.
To compare the performance of PLDS and PfLDS on the whole dataset, we use 10 trials from each
of the 72 grating orientations (720 trials in total) as a training set, and 1 trial from each orientation
[Figure 2 graphic: (a) firing rate (spikes/s) vs. time after stimulus onset (300-1200 ms) for Neurons #77, #115, #145, comparing true, PLDS, and PfLDS rates; (b)(c) 2D latent embeddings for PLDS and PfLDS; (d)(e) % MSE reduction and % NLL reduction vs. latent dimensionality (2-10); see caption below.]
Figure 2: Results for fits to Macaque V1 data (single orientation). (a) Comparing true firing rate (black
line) with fitted rate from PLDS (blue) and PfLDS (red) with a 2-dimensional latent space for selected
neurons (orientation 0°, averaged across all 120 training trials); (b)(c) 2D latent-space embeddings of
10 sample training trials, color denotes phase of the grating stimulus (orientation 0°); (d)(e) Predictive
mean square error (MSE) and predictive negative log likelihood (NLL) reduction with one-step-ahead
prediction, compared to a baseline model (homogeneous Poisson process). Results are averaged
across 12 orientations.
as a test set. For PfLDS we further divide the 720 trials into 648 for fitting and 72 for validation.
We observe in Fig. 3(a)(b) that PfLDS again provides much better predictive performance with a
small number of latent dimensions. We also find that for PfLDS with 4 latent dimensions, when we
project the observations into the latent space and take the first 3 principal components, the trajectory
forms a torus (Fig. 3(c)). Once again, this result has an intuitive appeal: just as the sinusoidal stimuli
(for a fixed orientation, across time) are naturally embedded into a 2D ring, stimulus variation in
orientation (at a fixed time) also has a natural circular symmetry. Taken together, the stimulus has
a natural toroidal topology. We find that fLDS is capable of uncovering this latent structure, even
without any prior knowledge of the stimulus structure.
[Figure 3 graphic: (a)(b) % MSE reduction and % NLL reduction vs. latent dimensionality (2-10) for PLDS and PfLDS; (c) 3D embedding colored by grating orientation (degrees), 500 ms after stimulus onset; see caption below.]
Figure 3: Macaque V1 data fitting result (full data). (a)(b) Predictive MSE and NLL reduction. (c) 3D
embedding of the mean latent trajectory of the neural activity during 300 ms to 500 ms after stimulus
onset across grating orientations 0°, 5°, ..., 175°; here we use PfLDS with 4 latent dimensions and
then project the result on the first 3 principal components. A video for the 3D embedding can be
found at https://www.dropbox.com/s/cluev4fzfsob4q9/video_fLDS.mp4?dl=0
Macaque center-out reaching data: We analyzed the neural population data recorded from the
macaque motor cortex (G20040123), details of which can be found in [11, 1]. Briefly, the data consist
of simultaneous recordings of 105 neurons for 56 cued reaches from the center of a screen to 14
peripheral targets. We analyze the reaching period (50 ms before and 370 ms after movement onset)
for each trial. We discretize the data at Δt = 20 ms, resulting in T = 21 time points per trial. For
each target we use 50 training trials and 6 testing trials and fit all the 14 reaching targets together
(making 700 training trials and 84 testing trials). We use both Poisson and GC noise models, as GC
has the flexibility to capture the noted under-dispersion of the data [3]. We compare both PLDS and
PfLDS as well as GCLDS and GCfLDS fits. For both PfLDS and GCfLDS we further divide the
training trials into 630 for fitting and 70 for validation.
As shown in Fig. 4(d), PfLDS and GCfLDS with latent dimension 2 or 3 outperform their
linear counterparts with much larger latent dimensions. We also find that GCLDS and GCfLDS
models give much better predictive likelihood than their Poisson counterparts. In Fig. 4(b)(c)
we project the neural activity onto the 2-dimensional latent space. We find that PfLDS (Fig. 4(c))
clearly separates the reaching trajectories and orders them in exact correspondence with the true
spatial location of the targets.
[Figure 4 graphic: (a) reaching trajectories; (b)(c) 2D latent embeddings from PLDS and PfLDS; (d) % NLL reduction vs. latent dimensionality (2-8) for PLDS, PfLDS, GCLDS, GCfLDS; see caption below.]
Figure 4: Macaque center-out reaching data analysis: (a) 5 sample reaching trajectories for each of
the 14 target locations. Directions are coded by different colors, and distances are coded by different
marker sizes; (b)(c) 2D embeddings of neuron activity extracted by PLDS and PfLDS, circles represent
50ms before movement onset and triangles represent 340ms after movement onset. Here 5 training
reaches for each target location are plotted; (d) Predictive negative log likelihood (NLL) reduction
with one-step-ahead prediction.
6 Discussion and Conclusion
We have proposed fLDS, a modeling framework for high-dimensional neural population data that
extends previous latent, low-dimensional linear dynamical system models with a flexible, nonlinear
observation model. Additionally, we described an efficient variational inference algorithm suitable
for fitting a broad class of LDS models ? including several previously-proposed models. We illustrate
in both simulation and application to real data that, even when a neural population is modulated by a
low-dimensional linear dynamics, a latent variable model with a linear rate function fails to capture
the true low-dimensional structure. In contrast, a fLDS can recover the low-dimensional structure,
providing better predictive performance and more interpretable latent-variable representations.
[21] extends the linear Kalman filter by using neural network models to parameterize both the dynamics
equation and the observation equation, with an RNN-based recognition model for inference. [22]
composes graphical models with neural network observations and proposes a structured auto-encoder
variational inference algorithm. Our work focuses on modeling count observations for neural
spike train data, which is orthogonal to the papers mentioned above.
Our approach is distinct from related manifold learning methods [23, 24]. While most manifold
learning techniques rely primarily on the notion of nearest neighbors, we exploit the temporal structure
of the data by imposing strong prior assumptions about the dynamics of our latent space. Further, in
contrast to most manifold learning approaches, our approach includes an explicit generative model
that lends itself naturally to inference and prediction, and allows for count-valued observations that
account for the discrete nature of neural data.
Future work includes relaxing the latent linear dynamical system assumption to incorporate more
flexible latent dynamics (for example, by using a Gaussian process prior [12] or by incorporating a
nonlinear dynamical phase space [25]). We also anticipate our approach may be useful in applications
to neural decoding and prosthetics: once trained, our approximate posterior may be evaluated in close
to real-time.
A Python/Theano [26, 27] implementation of our algorithms is available at http://github.com/earcher/vilds.
References
[1] J. H. Macke, L. Buesing, J. P. Cunningham, B. M. Yu, K. V. Shenoy, and M. Sahani, "Empirical models of spiking in neural populations," in NIPS, pp. 1350-1358, 2011.
[2] D. Pfau, E. A. Pnevmatikakis, and L. Paninski, "Robust learning of low-dimensional dynamics from large neural ensembles," in NIPS, pp. 2391-2399, 2013.
[3] Y. Gao, L. Busing, K. V. Shenoy, and J. P. Cunningham, "High-dimensional neural spike train analysis with generalized count linear dynamical systems," in NIPS, pp. 2035-2043, 2015.
[4] M. M. Churchland, J. P. Cunningham, M. T. Kaufman, J. D. Foster, P. Nuyujukian, S. I. Ryu, and K. V. Shenoy, "Neural population dynamics during reaching," Nature, vol. 487, no. 7405, pp. 51-56, 2012.
[5] R. L. Goris, J. A. Movshon, and E. P. Simoncelli, "Partitioning neuronal variability," Nature Neuroscience, vol. 17, no. 6, pp. 858-865, 2014.
[6] A. S. Ecker, P. Berens, R. J. Cotton, M. Subramaniyan, G. H. Denfield, C. R. Cadwell, S. M. Smirnakis, M. Bethge, and A. S. Tolias, "State dependence of noise correlations in macaque primary visual cortex," Neuron, vol. 82, no. 1, pp. 235-248, 2014.
[7] E. W. Archer, U. Koster, J. W. Pillow, and J. H. Macke, "Low-dimensional models of neural population activity in sensory cortical circuits," in NIPS, pp. 343-351, 2014.
[8] D. P. Kingma and M. Welling, "Auto-encoding variational Bayes," arXiv preprint arXiv:1312.6114, 2013.
[9] M. Titsias and M. Lázaro-Gredilla, "Doubly stochastic variational Bayes for non-conjugate inference," in ICML, pp. 1971-1979, 2014.
[10] D. J. Rezende, S. Mohamed, and D. Wierstra, "Stochastic backpropagation and approximate inference in deep generative models," arXiv preprint arXiv:1401.4082, 2014.
[11] B. M. Yu, J. P. Cunningham, G. Santhanam, S. I. Ryu, K. V. Shenoy, and M. Sahani, "Gaussian-process factor analysis for low-dimensional single-trial analysis of neural population activity," Journal of Neurophysiology, vol. 102, no. 1, pp. 614-635, 2009.
[12] Y. Zhao and I. M. Park, "Variational latent Gaussian process for recovering single-trial dynamics from population spike trains," arXiv preprint arXiv:1604.03053, 2016.
[13] L. Buesing, T. A. Machado, J. P. Cunningham, and L. Paninski, "Clustered factor analysis of multineuronal spike data," in NIPS, pp. 3500-3508, 2014.
[14] Y. Burda, R. Grosse, and R. Salakhutdinov, "Importance weighted autoencoders," arXiv preprint arXiv:1509.00519, 2015.
[15] R. Ranganath, S. Gerrish, and D. M. Blei, "Black box variational inference," arXiv preprint arXiv:1401.0118, 2013.
[16] M. D. Zeiler, "ADADELTA: An adaptive learning rate method," arXiv preprint arXiv:1212.5701, 2012.
[17] E. Archer, I. M. Park, L. Buesing, J. Cunningham, and L. Paninski, "Black box variational inference for state space models," arXiv preprint arXiv:1511.07367, 2015.
[18] M. Emtiyaz Khan, A. Aravkin, M. Friedlander, and M. Seeger, "Fast dual variational inference for non-conjugate latent Gaussian models," in ICML, pp. 951-959, 2013.
[19] T. Hafting, M. Fyhn, S. Molden, M.-B. Moser, and E. I. Moser, "Microstructure of a spatial map in the entorhinal cortex," Nature, vol. 436, no. 7052, pp. 801-806, 2005.
[20] A. B. Graf, A. Kohn, M. Jazayeri, and J. A. Movshon, "Decoding the activity of neuronal populations in macaque primary visual cortex," Nature Neuroscience, vol. 14, no. 2, pp. 239-245, 2011.
[21] R. G. Krishnan, U. Shalit, and D. Sontag, "Deep Kalman filters," arXiv preprint arXiv:1511.05121, 2015.
[22] M. J. Johnson, D. Duvenaud, A. B. Wiltschko, S. R. Datta, and R. P. Adams, "Composing graphical models with neural networks for structured representations and fast inference," arXiv:1603.06277, 2016.
[23] S. T. Roweis and L. K. Saul, "Nonlinear dimensionality reduction by locally linear embedding," Science, vol. 290, no. 5500, pp. 2323-2326, 2000.
[24] J. B. Tenenbaum, V. De Silva, and J. C. Langford, "A global geometric framework for nonlinear dimensionality reduction," Science, vol. 290, no. 5500, pp. 2319-2323, 2000.
[25] R. Frigola, Y. Chen, and C. Rasmussen, "Variational Gaussian process state-space models," in NIPS, pp. 3680-3688, 2014.
[26] F. Bastien, P. Lamblin, R. Pascanu, J. Bergstra, I. Goodfellow, A. Bergeron, N. Bouchard, D. Warde-Farley, and Y. Bengio, "Theano: new features and speed improvements," arXiv preprint arXiv:1211.5590, 2012.
[27] J. Bergstra, O. Breuleux, F. Bastien, P. Lamblin, R. Pascanu, G. Desjardins, J. Turian, D. Warde-Farley, and Y. Bengio, "Theano: a CPU and GPU math expression compiler," in Proceedings of the Python for Scientific Computing Conference (SciPy), vol. 4, p. 3, Austin, TX, 2010.
| 6430 |@word neurophysiology:1 trial:30 private:1 briefly:1 proportion:2 norm:1 busing:1 seek:1 simulation:8 covariance:3 decomposition:1 q1:3 datagenerating:1 thereby:1 carry:1 reduction:13 initial:1 series:1 contains:1 united:1 uncovered:1 ours:1 interestingly:1 outperforms:1 existing:1 recovered:2 comparing:1 nt:2 com:2 gpu:1 john:1 earcher:1 informative:1 fyhn:1 motor:3 designed:2 interpretable:2 update:1 generative:12 selected:2 ith:4 record:1 blei:1 provides:1 pascanu:2 math:1 location:3 wierstra:1 along:2 consists:1 doubly:1 fitting:7 combine:1 p1:1 nor:1 multi:1 brain:1 salakhutdinov:1 cpu:1 spain:1 project:3 underlying:1 notation:2 panel:6 circuit:1 kaufman:1 pursue:1 monkey:1 unobserved:2 transformation:2 temporal:4 smirnakis:1 rm:2 scaled:1 toroidal:1 partitioning:1 unit:1 shenoy:4 positive:2 before:5 encoding:3 firing:6 approximately:1 black:4 suggests:1 relaxing:1 liam:2 averaged:3 zaro:1 testing:5 offsetting:1 practice:2 block:1 backpropagation:1 xr:21 evan:2 area:2 rnn:1 evolving:1 empirical:1 bergeron:1 seeing:1 suggest:1 get:1 cannot:1 onto:1 close:1 optimize:1 www:1 deterministic:1 ecker:1 center:3 maximizing:1 map:1 flexibly:1 duration:1 focused:3 constrast:2 scipy:1 insight:1 estimator:1 array:1 hafting:1 nuclear:1 lamblin:2 population:24 embedding:4 notion:1 variation:1 analogous:1 laplace:1 target:6 exact:1 homogeneous:1 us:1 goodfellow:1 element:3 adadelta:2 zrt:20 recognition:3 bottom:2 preprint:9 capture:12 parameterize:2 thousand:1 commonplace:1 region:1 movement:3 mentioned:1 environment:1 complexity:1 ideally:1 warde:2 dynamic:11 trained:1 depend:1 churchland:1 predictive:15 purely:1 upon:6 req:1 titsias:1 triangle:1 necessitates:1 easily:1 represented:1 tx:1 train:4 distinct:5 fast:3 describe:1 monte:2 whose:1 richer:1 larger:3 supplementary:1 valued:1 relax:1 elbo:3 encoder:1 ability:1 gi:1 itself:1 nll:8 propose:3 lowdimensional:2 statistics1:1 remainder:1 neighboring:2 flexibility:3 roweis:1 intuitive:2 pll:7 getting:1 generating:4 adam:1 ring:1 cued:1 yuanjun:1 illustrate:3 stat:2 nearest:1 eq:7 grating:8 strong:3 recovering:3 direction:1 aravkin:1 filter:5 stochastic:5 occupies:1 enable:1 material:1 bin:5 require:1 microstructure:1 clustered:1 anticipate:1 considered:1 duvenaud:1 exp:4 desjardins:1 vary:2 estimation:1 pnevmatikakis:1 tool:1 weighted:1 minimization:1 clearly:1 gaussian:11 rrl:1 rather:2 reaching:8 rezende:1 focus:4 improvement:1 prosthetics:1 bernoulli:3 likelihood:8 contrast:1 seeger:1 baseline:2 sense:1 inference:29 stopping:1 nn:1 entire:1 typically:2 cunningham:6 archer:3 arg:2 among:1 dual:2 orientation:24 uncovering:1 flexible:2 proposes:1 animal:2 art:2 spatial:2 initialize:2 marginal:1 equal:1 once:2 identical:1 represents:1 broad:1 yu:2 unsupervised:1 icml:2 park:2 future:1 report:2 stimulus:16 primarily:3 modern:1 randomly:1 simultaneously:2 individual:1 phase:3 consisting:1 prepend:1 highly:1 circular:1 analyzed:1 farley:2 activated:1 held:3 capable:3 necessary:1 intense:1 orthogonal:1 indexed:1 divide:4 circle:1 plotted:1 shalit:1 jazayeri:1 fitted:5 instance:1 column:1 modeling:4 vbem:2 nuyujukian:1 retains:1 vertex:1 entry:1 hundred:1 johnson:1 tridiagonal:1 moser:2 retain:1 decoding:2 pool:1 together:2 bethge:1 squared:1 again:2 recorded:6 choose:1 gclds:7 worse:1 macke:2 zhao:1 grossman:1 account:1 sinusoidal:2 de:1 bergstra:2 includes:2 explicitly:2 onset:8 performed:2 cadwell:1 break:1 closed:1 multiplicative:1 analyze:4 portion:1 competitive:1 recover:6 hampering:1 bayes:4 complicated:1 red:1 bouchard:1 compiler:1 ass:1 
square:1 variance:2 characteristic:1 ensemble:1 yield:1 spaced:1 emtiyaz:1 lds:13 buesing:3 comparably:1 iid:1 carlo:2 trajectory:15 drive:1 composes:1 simultaneous:3 oscillatory:4 reach:2 frequency:1 pp:16 mohamed:1 naturally:2 di:2 recovers:2 couple:1 stop:2 dataset:8 popular:1 color:2 knowledge:1 dimensionality:7 feed:1 higher:1 follow:1 permitted:1 response:12 evaluated:1 though:1 box:2 generality:1 just:1 correlation:4 until:1 hand:2 autoencoders:1 langford:1 nonlinear:21 marker:1 logistic:1 quality:1 behaved:1 scientific:1 molden:1 true:15 unbiased:1 counterpart:2 adequately:1 hence:1 inspiration:1 sinusoid:1 sin:1 during:3 noted:1 m:15 generalized:4 performs:1 silva:1 variational:19 wise:1 novel:2 recently:1 superior:1 common:2 machado:1 spiking:3 rl:4 overview:1 mt:3 approximates:2 refer:3 significant:1 imposing:1 tuning:1 grid:6 nonlinearity:2 access:1 cortex:9 posterior:15 own:2 recent:4 multivariate:2 driven:3 visualizable:1 captured:1 impose:2 determine:1 period:1 signal:1 full:3 desirable:1 simoncelli:1 infer:3 reduces:1 smooth:3 adapt:2 rti:4 wiltschko:1 divided:1 goris:1 equally:1 permitting:1 coded:2 paired:1 watched:1 prediction:6 poisson:11 arxiv:19 iteration:2 represent:4 sometimes:1 cell:7 separately:1 pgc:1 extra:2 breuleux:1 ascent:1 recording:3 extracting:1 bengio:2 embeddings:3 easy:1 krishnan:1 fit:7 topology:1 thread:1 motivated:1 expression:1 kohn:1 movshon:2 multineuronal:1 sontag:1 york:1 deep:2 generally:1 useful:1 se:2 k2n:1 cleaner:1 amount:1 ten:1 locally:1 tenenbaum:1 documented:1 generate:1 http:2 outperform:2 percentage:1 xr1:1 neuroscience:2 per:4 blue:2 discrete:4 vol:9 express:2 santhanam:1 neither:1 v1:5 jpc2181:1 run:1 koster:1 parameterized:3 respond:1 extends:2 family:1 pflds:33 comparable:2 capturing:2 layer:1 bound:1 correspondence:1 identifiable:1 activity:11 ahead:5 simulate:1 speed:1 relatively:1 department:1 structured:4 according:1 gredilla:1 peripheral:1 conjugate:4 across:9 slightly:1 em:1 suppressed:1 appealing:1 evolves:1 making:1 restricted:1 pr:1 theano:3 taken:1 computationally:2 equation:2 visualization:1 previously:3 conjugacy:1 count:9 tractable:5 acronym:1 generalizes:1 operation:1 plds:24 permit:4 available:1 observe:1 drifting:3 binomial:3 spurred:1 remaining:1 running:3 include:3 denotes:1 graphical:2 zeiler:1 exploit:1 classical:2 move:1 spike:17 strategy:2 primary:5 dependence:3 diagonal:1 predominate:1 exhibit:2 gradient:4 lends:1 subspace:1 distance:1 link:7 separate:2 simulated:5 manifold:5 spanning:1 g20040123:1 xrt:2 length:2 kalman:5 modeled:1 index:3 providing:2 rotational:1 innovation:1 trace:2 negative:6 implementation:1 reliably:1 zt:1 contributed:1 perform:2 discretize:3 upper:2 neuron:38 observation:22 datasets:5 dispersion:1 denfield:1 dropbox:1 variability:4 ever:1 rn:1 gc:6 arbitrary:3 datta:1 inferred:1 nonlinearly:1 khan:1 pfau:1 cotton:1 ryu:2 barcelona:1 kingma:1 nip:7 macaque:10 able:2 dynamical:15 usually:2 pattern:2 interpretability:1 max:2 video:1 including:1 event:1 suitable:1 natural:2 rely:1 zr:10 scheme:1 movie:1 github:1 temporally:2 auto:4 columbia:5 coupled:1 sahani:2 text:1 prior:5 literature:2 review:1 epoch:1 python:2 friedlander:1 geometric:1 graf:1 embedded:1 expect:1 frigola:1 ingredient:1 validation:4 degree:1 principle:3 foster:1 row:2 austin:1 repeat:1 rasmussen:1 bias:1 burda:1 mismatched:1 wide:1 neighbor:1 taking:2 saul:1 anesthetized:1 distributed:1 curve:1 dimension:9 cortical:1 pillow:1 rich:4 sensory:1 author:1 forward:1 adaptive:1 projected:1 welling:1 ranganath:1 approximate:14 
compact:2 global:1 active:1 reveals:1 tolias:1 search:1 latent:75 continuous:1 table:2 additionally:3 learn:5 nature:6 robust:1 composing:1 symmetry:1 mse:6 complex:1 berens:1 aevb:15 linearly:2 whole:2 noise:11 turian:1 repeated:3 body:1 neuronal:3 x1:1 fig:10 screen:1 grosse:1 ny:1 precision:1 fails:1 inferring:2 timepoints:2 torus:1 exponential:3 explicit:1 lie:2 minute:1 xt:1 specific:2 bastien:2 r2:1 appeal:1 normalizing:1 evidence:1 incorporating:2 intractable:1 essential:1 mp4:1 dl:1 consist:1 importance:1 ci:3 entorhinal:1 conditioned:1 chen:1 paninski:3 gao:2 visual:5 monotonic:1 gerrish:1 dispersed:1 extracted:1 minibatches:1 goal:1 dzr:1 shared:2 experimentally:1 typical:1 except:1 uniformly:1 principal:2 called:2 zr1:2 pas:1 total:1 experimental:3 select:1 rit:1 internal:1 cholesky:1 modulated:2 incorporate:1 evaluate:1 correlated:2 |
6,004 | 6,431 | Improved Error Bounds for Tree Representations of
Metric Spaces
Samir Chowdhury
Department of Mathematics
The Ohio State University
Columbus, OH 43210
chowdhury.57@osu.edu
Facundo Mémoli
Department of Mathematics
Department of Computer Science and Engineering
The Ohio State University
Columbus, OH 43210
memoli@math.osu.edu
Zane Smith
Department of Computer Science and Engineering
The Ohio State University
Columbus, OH 43210
smith.9911@osu.edu
Abstract
Estimating optimal phylogenetic trees or hierarchical clustering trees from metric
data is an important problem in evolutionary biology and data analysis. Intuitively,
the goodness-of-fit of a metric space to a tree depends on its inherent treeness, as
well as other metric properties such as intrinsic dimension. Existing algorithms for
embedding metric spaces into tree metrics provide distortion bounds depending on
cardinality. Because cardinality is a simple property of any set, we argue that such
bounds do not fully capture the rich structure endowed by the metric. We consider
an embedding of a metric space into a tree proposed by Gromov. By proving a
stability result, we obtain an improved additive distortion bound depending only on
the hyperbolicity and doubling dimension of the metric. We observe that Gromov's
method is dual to the well-known single linkage hierarchical clustering (SLHC)
method. By means of this duality, we are able to transport our results to the setting
of SLHC, where such additive distortion bounds were previously unknown.
1 Introduction
Numerous problems in data analysis are formulated as the question of embedding high-dimensional
metric spaces into ?simpler" spaces, typically of lower dimension. In classical multidimensional
scaling (MDS) techniques [18], the goal is to embed a space into two or three dimensional Euclidean
space while preserving interpoint distances. Classical MDS is helpful in exploratory data analysis,
because it allows one to find hidden groupings in amorphous data by simple visual inspection.
Generalizations of MDS exist for which the target space can be a tree metric space?see [13] for a
summary of some of these approaches, written from the point of view of metric embeddings. The
metric embeddings literature, which grew out of MDS, typically highlights the algorithmic gains
made possible by embedding a complicated metric space into a simpler one [13].
The special case of MDS where the target space is a tree has been of interest in phylogenetics for
quite some time [19, 5]; the numerical taxonomy problem (NTP) is that of finding an optimal tree
embedding for a given metric space $(X, d_X)$, i.e. a tree $(X, t_X)$ such that the additive distortion,
defined as $\|d_X - t_X\|_{\ell^\infty(X \times X)}$, is minimal over all possible tree metrics on $X$. This problem turns
out to be NP-hard [3]; however, a 3-approximation algorithm exists [3], and a variant of this problem,
that of finding an optimal ultrametric tree, can be solved in polynomial time [11]. An ultrametric
tree is a rooted tree where every point is equidistant from the root; for example, ultrametric trees
are the outputs of hierarchical clustering (HC) methods that show groupings in data across different
resolutions. A known connection between HC and MDS is that the output ultrametric of single linkage
hierarchical clustering (SLHC) is a 2-approximation to the optimal ultrametric tree embedding [16],
thus providing a partial answer to the NTP. However, it appears that the existing line of work regarding
NTP does not address the question of quantifying the $\ell^\infty$ distance between a metric $(X, d_X)$ and its
optimal tree metric, or even the optimal ultrametric. More specifically, we can ask:
Question 1. Given a set $X$, a metric $d_X$, and an optimal tree metric $t_X^{\mathrm{opt}}$ (or an optimal ultrametric $u_X^{\mathrm{opt}}$), can one find a nontrivial upper bound on $\|d_X - t_X^{\mathrm{opt}}\|_{\ell^\infty(X \times X)}$ (resp. $\|d_X - u_X^{\mathrm{opt}}\|_{\ell^\infty(X \times X)}$)
depending on properties of the metric $d_X$?
The question of distortion bounds is treated from a different perspective in the discrete algorithms
literature. In this domain, tree embeddings are typically described with multiplicative distortion
bounds (described in §2) depending on the cardinality of the underlying metric space, along with
(typically) pathological counterexamples showing that these bounds are tight [4, 10]. We remark
immediately that (1) multiplicative distortion is distinct from the additive distortion encountered
in the NTP, and (2) these embeddings are rarely used in machine learning, where HC and MDS
methods are the main workhorses. Moreover, such multiplicative distortion bounds do not take
two considerations into account: (1) the ubiquitousness of very large data sets means that a bound
dependent on cardinality is not desirable, and (2) ?nice" properties such as low intrinsic dimensionality
or treeness of real-world datasets are not exploited in cardinality bounds.
We prove novel additive distortion bounds for two methods of tree embeddings: one into general
trees, and one into ultrametric trees. These additive distortion bounds take into account (1) whether
the data is treelike, and (2) whether the data has low doubling dimension, which is a measure of its
intrinsic dimension. Thus we prove an answer to Question 1 above, namely, that the approximation
error made by an optimal tree metric (or optimal ultrametric) can be bounded nontrivially.
Remark 1. The trivial upper bound is $\|d_X - t_X^{\mathrm{opt}}\|_{\ell^\infty(X \times X)} \le \mathrm{diam}(X, d_X)$. To see this, observe
that any ultrametric is a tree, and that SLHC yields an ultrametric $u_X$ that is bounded above by $d_X$.
An overview of our approach. A common measure of treeness is Gromov's $\delta$-hyperbolicity, which
is a local condition on 4-point subsets of a metric space. Hyperbolicity has been shown to be a useful
statistic for evaluating the quality of trees in [7]. The starting point for our work is a method used
by Gromov to embed metric spaces into trees, which we call Gromov's embedding [12]. A known
result, which we call Gromov's embedding theorem, is that if every 4-point subset of an $n$-point
metric space is $\delta$-hyperbolic, then the metric space embeds into a tree with $\ell^\infty$ distortion bounded
above by $2\delta \log_2(2n)$. The proof proceeds by a linkage argument, i.e. by invoking the definition
of hyperbolicity at different scales along chains of points. By virtue of the embedding theorem,
one can argue that hyperbolicity is a measure of the "treeness" of a given metric space. It has been
shown in [1, 2] that various real-world data sets, such as Internet latencies and biological, social, and
collaboration networks, are inherently treelike, i.e. have low hyperbolicity. Thus, by Gromov's result,
these real-world data sets can be embedded into trees with additive distortion controlled by their
respective cardinalities. The cardinality bound might of course be undesirable, especially for very
large data sets such as the Internet. However, it has been claimed without proof in [1] that Gromov's
embedding can yield a 3-approximation to the NTP, independent of [3].
We note that the assumption of a metric input is not apparent in Gromov's embedding theorem.
Moreover, the proof of the theorem does not utilize any metric property. This leads one to hope for
bounds where the dependence on cardinality is replaced by a dependence on some metric notion.
A natural candidate for such a metric notion is the doubling dimension of a space [15], which has
already found applications in learning [17] and algorithm design [15]. In this paper, we present novel
upper bounds on the additive distortion of a Gromov embedding, depending only on the hyperbolicity
and doubling dimension of the metric space.
Our main tool is a stability theorem that we prove using a metric induced by a Voronoi partition. This
result is then combined with the results of Gromov's linkage argument. Both the stability theorem
and Gromov's theorem rely on the embedding satisfying a particular linkage condition, which can
be described as follows: for any embedding $f : (X, d_X) \to (X, t_X)$, and any $x, x' \in X$, we have
$t_X(x, x') = \max_c \min_i \Psi(x_i, x_{i+1})$, where $c = \{x_i\}_{i=0}^{k}$ is a chain of points joining $x$ to $x'$ and $\Psi$
is some function of $d_X$. A dual notion is to flip the order of the max, min operations. Interestingly,
under the correct objective function $\Psi$, this leads to the well-studied notion of SLHC. By virtue of this
duality, the arguments of both the stability theorem and the scaling theorem apply in the SLHC setting.
We introduce a new metric space statistic that we call ultrametricity (analogous to hyperbolicity), and
are then able to obtain novel lower bounds, depending only on doubling dimension and ultrametricity,
for the distortion incurred by a metric space when embedding into an ultrametric tree via SLHC.
We remark that just by virtue of the duality between Gromov's embedding and the SLHC embedding,
it is possible to obtain a distortion bound for SLHC depending on cardinality. We were unable to
find such a bound in the existing HC literature, so it appears that even the knowledge of this duality,
which bridges the domains of HC and MDS methods, is not prevalent in the community.
The paper is organized as follows. The main thrust of our work is explained in §1. In §2 we
develop the context of our work by highlighting some of the surrounding literature. We provide
all definitions and notation, including the Voronoi partition construction, in §3. In §4 we describe
Gromov's embedding and present Gromov's distortion bound in Theorem 3. Our contributions begin
with Theorem 4 in §4 and include all the results that follow: namely the stability results in §5, the
improved distortion bounds in §6, and the proof of tightness in §7.
The supplementary material contains (1) an appendix with proofs omitted from the body, (2) a
practical demonstration in §A where we apply Gromov's embedding to a bitmap image of a tree
and show that our upper bounds perform better than the bounds suggested by Gromov's embedding
theorem, and (3) Matlab .m files containing demos of Gromov's embedding being applied to various
images of trees.
2 Related Literature
MDS is explained thoroughly in [18]. In metric MDS [18] one attempts to find an embedding of the
data $X$ into a low-dimensional Euclidean space given by a point cloud $Y \subseteq \mathbb{R}^d$ (where often $d = 2$
or $d = 3$) such that the metric distortion (measured by the Frobenius norm of the difference of the
Gram matrices of $X$ and $Y$) is minimized. The most common non-metric variant of MDS is referred
to as ordinal embedding, and has been studied in [14].
A common problem with metric MDS is that when the intrinsic dimension of the data is higher than
the embedding dimension, the clustering in the original data may not be preserved [21]. One variant
of MDS that preserves clusters is the tree preserving embedding [20], where the goal is to preserve
the single linkage (SL) dendrogram from the original data. This is especially important for certain
types of biological data, for the following reasons: (1) due to speciation, many biological datasets are
inherently "treelike", and (2) the SL dendrogram is a 2-approximation to the optimal ultrametric tree
embedding [16], so intuitively, preserving the SL dendrogram preserves the "treeness" of the data.
Preserving the treeness of a metric space is related to the notion of finding an optimal embedding into
a tree, which ties back to the numerical taxonomy problem. The SL dendrogram is an embedding of
a metric space into an ultrametric tree, and can be used to find the optimal ultrametric tree [8].
The quality of an embedding is measured by computing its distortion, which has different definitions
in different domain areas. Typically, a tree embedding is defined to be an injective map $f : X \to Y$
between metric spaces $(X, d_X)$ and $(Y, t_Y)$, where the target space is a tree. We have defined the
additive distortion of a tree embedding in an $\ell^\infty$ setting above, but $\ell^p$ notions, for $p \in [1, \infty)$, can
also be defined. Past efforts to embed a metric into a tree with low additive distortion are described
in [19, Chapter 7]. One can also define a multiplicative distortion [4, 10], but this is studied in the
domain of discrete algorithms and is not our focus in the current work.
3 Preliminaries on metric spaces, distances, and doubling dimension
A finite metric space $(X, d_X)$ is a finite set $X$ together with a function $d_X : X \times X \to \mathbb{R}_+$
such that: (1) $d_X(x, x') = 0 \iff x = x'$, (2) $d_X(x, x') = d_X(x', x)$, and (3) $d_X(x, x') \le d_X(x, x'') + d_X(x'', x')$ for any $x, x', x'' \in X$. A pointed metric space is a triple $(X, d_X, p)$, where
$(X, d_X)$ is a finite metric space and $p \in X$. All the spaces we consider are assumed to be finite.
For a metric space $(X, d_X)$, the diameter is defined to be $\mathrm{diam}(X, d_X) := \max_{x,x' \in X} d_X(x, x')$.
The hyperbolicity of $(X, d_X)$ was defined by Gromov [12] as follows:
$$\mathrm{hyp}(X, d_X) := \max_{x_1,x_2,x_3,x_4 \in X} \Psi^{\mathrm{hyp}}_X(x_1, x_2, x_3, x_4), \text{ where}$$
$$\Psi^{\mathrm{hyp}}_X(x_1, x_2, x_3, x_4) := \tfrac{1}{2}\Big( d_X(x_1, x_2) + d_X(x_3, x_4) - \max\big( d_X(x_1, x_3) + d_X(x_2, x_4),\ d_X(x_1, x_4) + d_X(x_2, x_3) \big) \Big).$$
A tree metric space $(X, t_X)$ is a finite metric space such that $\mathrm{hyp}(X, t_X) = 0$ [19]. In our work, we
strengthen the preceding characterization of trees to the special class of ultrametric trees. Recall that
an ultrametric space $(X, u_X)$ is a metric space satisfying the strong triangle inequality:
$$u_X(x, x') \le \max\big( u_X(x, x''),\ u_X(x'', x') \big), \quad \forall x, x', x'' \in X.$$
Definition 1. We define the ultrametricity of a metric space $(X, d_X)$ as:
$$\mathrm{ult}(X, d_X) := \max_{x_1,x_2,x_3 \in X} \Psi^{\mathrm{ult}}_X(x_1, x_2, x_3), \text{ where}$$
$$\Psi^{\mathrm{ult}}_X(x_1, x_2, x_3) := d_X(x_1, x_3) - \max\big( d_X(x_1, x_2),\ d_X(x_2, x_3) \big).$$
We introduce ultrametricity to quantify the deviation of a metric space from being ultrametric. Notice
that (X, uX ) is an ultrametric space if and only if ult(X, uX ) = 0. One can verify that an ultrametric
space is a tree metric space.
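As an illustration of these definitions (our own sketch, not part of the paper), both statistics can be computed by brute force from a distance matrix, straight from the four- and three-point expressions above; the $O(n^4)$ and $O(n^3)$ costs are fine for small examples:

```python
import itertools
import numpy as np

def hyp(D: np.ndarray) -> float:
    # max of Psi_hyp over all ordered 4-tuples (repeats allowed, as in the definition)
    idx = range(len(D))
    return max(0.5 * (D[a, b] + D[c, d]
                      - max(D[a, c] + D[b, d], D[a, d] + D[b, c]))
               for a, b, c, d in itertools.product(idx, repeat=4))

def ult(D: np.ndarray) -> float:
    # max of Psi_ult over all ordered 3-tuples
    idx = range(len(D))
    return max(D[a, c] - max(D[a, b], D[b, c])
               for a, b, c in itertools.product(idx, repeat=3))
```

For instance, `ult` returns 0 on any ultrametric matrix and `hyp` returns 0 on any tree metric, matching the characterizations above.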
We will denote the cardinality of a set $X$ by writing $|X|$. Given a set $X$ and two metrics $d_X, d'_X$
defined on $X \times X$, we denote the $\ell^\infty$ distance between $d_X$ and $d'_X$ as follows:
$$\|d_X - d'_X\|_{\ell^\infty(X \times X)} := \max_{x,x' \in X} |d_X(x, x') - d'_X(x, x')|.$$
We use the shorthand $\|d_X - d'_X\|_\infty$ to mean $\|d_X - d'_X\|_{\ell^\infty(X \times X)}$. We write $\approx$ to mean "approximately
equal to." Given two functions $f, g : \mathbb{N} \to \mathbb{R}$, we will write $f \asymp g$ to mean asymptotic tightness, i.e.
that there exist constants $c_1, c_2$ such that $c_1 |f(n)| \le |g(n)| \le c_2 |f(n)|$ for sufficiently large $n \in \mathbb{N}$.
Induced metrics from Voronoi partitions. A key ingredient of our stability result involves a
Voronoi partition construction. Given a metric space $(X, d_X)$ and a subset $A \subseteq X$, possibly with its
own metric $d_A$, we can define a new metric $d^A_X$ on $X \times X$ using a Voronoi partition. First write $A = \{x_1, \ldots, x_n\}$. For each $1 \le i \le n$, we define $\widetilde{V}_i := \{x \in X : d_X(x, x_i) \le \min_{j \ne i} d_X(x, x_j)\}$.
Then $X = \bigcup_{i=1}^{n} \widetilde{V}_i$. Next we perform the following disjointification trick:
$$V_1 := \widetilde{V}_1,\quad V_2 := \widetilde{V}_2 \setminus \widetilde{V}_1,\quad \ldots,\quad V_n := \widetilde{V}_n \setminus \bigcup_{i=1}^{n-1} \widetilde{V}_i.$$
Then $X = \bigsqcup_{i=1}^{n} V_i$, a disjoint union of Voronoi cells $V_i$.
Next define the nearest-neighbor map $\eta : X \to A$ by $\eta(x) = x_i$ for each $x \in V_i$. The map $\eta$
simply sends each $x \in X$ to its closest neighbor in $A$, up to a choice when there are multiple nearest
neighbors. Then we can define a new (pseudo)metric $d^A_X : X \times X \to \mathbb{R}_+$ as follows:
$$d^A_X(x, x') := d_A(\eta(x), \eta(x')).$$
Observe that $d^A_X(x, x') = 0$ if and only if $x, x' \in V_i$ for some $1 \le i \le n$. Symmetry also holds, as
does the triangle inequality.
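For concreteness, here is a small sketch (ours) of this construction, given the full distance matrix `D` of $(X, d_X)$ and a list `A` of landmark indices; ties in the nearest-neighbor map are broken toward the smaller index, consistent with the disjointification above:

```python
import numpy as np

def voronoi_induced_metric(D: np.ndarray, A: list[int]) -> np.ndarray:
    eta = np.argmin(D[:, A], axis=1)   # eta: each point -> index (into A) of nearest landmark
    D_A = D[np.ix_(A, A)]              # restriction metric d_A
    return D_A[np.ix_(eta, eta)]       # d^A_X(x, x') = d_A(eta(x), eta(x'))
```

When $A$ is an $\varepsilon$-net, `np.abs(D - voronoi_induced_metric(D, A)).max()` is exactly the quantity bounded by $2\varepsilon$ in the $\varepsilon$-net special case discussed next.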
A special case of this construction occurs when $A$ is an $\varepsilon$-net of $X$ endowed with a restriction of
the metric $d_X$. Given a finite metric space $(X, d_X)$, an $\varepsilon$-net is a subset $X^\varepsilon \subseteq X$ such that: (1)
for any $x \in X$, there exists $s \in X^\varepsilon$ such that $d_X(x, s) < \varepsilon$, and (2) for any $s, s' \in X^\varepsilon$, we have
$d_X(s, s') \ge \varepsilon$ [15]. For notational convenience, we write $d^\varepsilon_X$ to refer to $d^{X^\varepsilon}_X$. In this case, we obtain:
$$\|d_X - d^\varepsilon_X\|_{\ell^\infty(X \times X)} = \max_{x,x' \in X} \big| d_X(x, x') - d^\varepsilon_X(x, x') \big| = \max_{1 \le i,j \le n}\ \max_{x \in V_i,\, x' \in V_j} \big| d_X(x, x') - d_X(x_i, x_j) \big| \le \max_{1 \le i,j \le n}\ \max_{x \in V_i,\, x' \in V_j} \big( d_X(x, x_i) + d_X(x', x_j) \big) \le 2\varepsilon. \qquad (1)$$
Covering numbers and doubling dimension. For a finite metric space $(X, d_X)$, the open ball of
radius $\varepsilon$ centered at $x \in X$ is denoted $B(x, \varepsilon)$. The $\varepsilon$-covering number of $(X, d_X)$ is defined as:
$$N_X(\varepsilon) := \min\Big\{ n \in \mathbb{N} : X \subseteq \bigcup_{i=1}^{n} B(x_i, \varepsilon) \text{ for some } x_1, \ldots, x_n \in X \Big\}.$$
Notice that the $\varepsilon$-covering number of $X$ is always bounded above by the cardinality of an $\varepsilon$-net. A
related quantity is the doubling dimension $\mathrm{ddim}(X, d_X)$ of a metric space $(X, d_X)$, which is defined
to be the minimal value $\rho$ such that any $\varepsilon$-ball in $X$ can be covered by at most $2^\rho$ $\varepsilon/2$-balls [15]. The
covering number and doubling dimension of a metric space $(X, d_X)$ are related as follows:
Lemma 2. Let $(X, d_X)$ be a finite metric space with doubling dimension bounded above by $\rho > 0$.
Then for all $\varepsilon \in (0, \mathrm{diam}(X)]$, we have $N_X(\varepsilon) \le \big( 8\, \mathrm{diam}(X)/\varepsilon \big)^\rho$.
4 Duality between Gromov's embedding and SLHC
Given a metric space $(X, d_X)$ and any two points $x, x' \in X$, we define a chain from $x$ to $x'$ over $X$
as an ordered set of points in $X$ starting at $x$ and ending at $x'$:
$$c = \{x_0, x_1, x_2, \ldots, x_n : x_0 = x,\ x_n = x',\ x_i \in X \text{ for all } 0 \le i \le n\}.$$
The set of all chains from $x$ to $x'$ over $X$ will be denoted $C_X(x, x')$. The cost of a chain $c = \{x_0, \ldots, x_n\}$ over $X$ is defined to be $\mathrm{cost}_X(c) := \max_{0 \le i < n} d_X(x_i, x_{i+1})$.
For any metric space $(X, d_X)$ and any $p \in X$, the Gromov product of $X$ with respect to $p$ is a map
$g_{X,p} : X \times X \to \mathbb{R}_+$ defined by:
$$g_{X,p}(x, x') := \tfrac{1}{2}\big( d_X(x, p) + d_X(x', p) - d_X(x, x') \big).$$
We can define a map $g^T_{X,p} : X \times X \to \mathbb{R}_+$ as follows:
$$g^T_{X,p}(x, x') := \max_{c \in C_X(x,x')}\ \min_{x_i, x_{i+1} \in c} g_{X,p}(x_i, x_{i+1}).$$
This induces a new metric $t_{X,p} : X \times X \to \mathbb{R}_+$:
$$t_{X,p}(x, x') := d_X(x, p) + d_X(x', p) - 2\, g^T_{X,p}(x, x').$$
Gromov observed that the space $(X, t_{X,p})$ is a tree metric space, and that $t_{X,p}(x, x') \le d_X(x, x')$
for any $x, x' \in X$ [12]. This yields the trivial upper bound:
$$\|d_X - t_X\|_\infty \le \mathrm{diam}(X, d_X).$$
The Gromov embedding $T$ is defined for any pointed metric space $(X, d_X, p)$ as $T(X, d_X, p) := (X, t_{X,p})$. Note that each choice of $p \in X$ will yield a tree metric $t_{X,p}$ that depends, a priori, on $p$.
Theorem 3 (Gromov's embedding theorem [12]). Let $(X, d_X, p)$ be an $n$-point pointed metric space,
and let $(X, t_{X,p}) = T(X, d_X, p)$. Then,
$$\|t_{X,p} - d_X\|_{\ell^\infty(X \times X)} \le 2 \log_2(2n)\, \mathrm{hyp}(X, d_X).$$
Gromov's embedding is an MDS method where the target is a tree. We observe that its construction is
dual (in the sense of swapping max and min operations) to the construction of the ultrametric space
produced as an output of SLHC. Recall that the SLHC method $H$ is defined for any metric space
$(X, d_X)$ as $H(X, d_X) = (X, u_X)$, where $u_X : X \times X \to \mathbb{R}_+$ is the ultrametric defined below:
$$u_X(x, x') := \min_{c \in C_X(x,x')} \mathrm{cost}_X(c).$$
As a consequence of this duality, we can bound the additive distortion of SLHC as below:
Theorem 4. Let $(X, d_X)$ be an $n$-point metric space, and let $(X, u_X) = H(X, d_X)$. Then we have:
$$\|d_X - u_X\|_{\ell^\infty(X \times X)} \le \log_2(2n)\, \mathrm{ult}(X, d_X).$$
Moreover, this bound is asymptotically tight.
The proof of Theorem 4 proceeds by invoking the definition of ultrametricity at various scales along
chains of points; we provide details in Appendix B. We remark that the bounds in Theorems 3, 4
depend on both a local (ultrametricity/hyperbolicity) and a global property (cardinality); however, a
natural improvement would be to exploit a global property that takes into account the metric structure
of the underlying space. The first step in this improvement is to prove a set of stability theorems.
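To make the duality tangible, here is a small sketch (ours, not from the paper) computing both embeddings from a distance matrix: $u_X$ is a minimax chain cost, obtainable by Floyd-Warshall-style relaxation with (min, max) updates, while $t_{X,p}$ applies the dual (max, min) relaxation to the Gromov products:

```python
import numpy as np

def slhc_ultrametric(D: np.ndarray) -> np.ndarray:
    U = D.copy()
    for k in range(len(U)):  # allow chains passing through intermediate point k
        U = np.minimum(U, np.maximum.outer(U[:, k], U[k, :]))
    return U

def gromov_tree_metric(D: np.ndarray, p: int) -> np.ndarray:
    g = 0.5 * (np.add.outer(D[:, p], D[:, p]) - D)  # Gromov products g_{X,p}(x, x')
    gT = g.copy()
    for k in range(len(D)):  # dual relaxation: maximize the min over chain links
        gT = np.maximum(gT, np.minimum.outer(gT[:, k], gT[k, :]))
    T = np.add.outer(D[:, p], D[:, p]) - 2.0 * gT
    np.fill_diagonal(T, 0.0)
    return T
```

On any metric input one can check numerically that `slhc_ultrametric(D)` is an ultrametric lying below `D`, and that `gromov_tree_metric(D, p)` satisfies the zero-hyperbolicity (four-point) condition, as the theory above predicts.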
5 Stability of SLHC and Gromov's embedding
It is known that SLHC is robust to small perturbations of the input data with respect to the Gromov-Hausdorff distance between metric spaces, whereas other HC methods, such as average linkage and
complete linkage, do not enjoy this stability [6]. We prove a particular stability result for SLHC
involving the $\ell^\infty$ distance, and then we exploit the duality observed in §4 to prove a similar stability
result for Gromov's embedding.
Theorem 5. Let $(X, d_X)$ be a metric space, and let $(A, d_A)$ be any subspace with the restriction
metric $d_A := d_X|_{A \times A}$. Let $H$ denote the SLHC method. Write $(X, u_X) = H(X, d_X)$ and $(A, u_A) = H(A, d_A)$. Also write $u^A_X(x, x') := u_A(\eta(x), \eta(x'))$ for $x, x' \in X$. Then we have:
$$\|H(X, d_X) - H(A, d_A)\|_\infty := \|u_X - u^A_X\|_\infty \le \|d_X - d^A_X\|_\infty.$$
Theorem 6. Let $(X, d_X, p)$ be a pointed metric space, and let $(A, d_A, a)$ be any subspace with the
restriction metric $d_A := d_X|_{A \times A}$ such that $\eta(p) = a$. Let $T$ denote the Gromov embedding. Write
$(X, t_{X,p}) = T(X, d_X, p)$ and $(A, t_{A,a}) = T(A, d_A, a)$. Also write $t^A_{X,p}(x, x') := t_{A,a}(\eta(x), \eta(x'))$
for $x, x' \in X$. Then we have:
$$\|T(X, d_X, p) - T(A, d_A, a)\|_\infty := \|t_{X,p} - t^A_{X,p}\|_\infty \le 5\|d_X - d^A_X\|_\infty.$$
The proofs for both of these results use similar techniques, and we present them in Appendix B.
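These inequalities are also easy to spot-check numerically. The following self-contained experiment (our own illustration, re-defining the small helpers from the earlier sketches) draws a random point cloud, picks a landmark subset $A$, and verifies the Theorem 5 inequality:

```python
import numpy as np

def slhc(D):  # minimax chain cost, as in the SLHC sketch of Section 4
    U = D.copy()
    for k in range(len(U)):
        U = np.minimum(U, np.maximum.outer(U[:, k], U[k, :]))
    return U

rng = np.random.default_rng(1)
pts = rng.normal(size=(40, 3))
D = np.linalg.norm(pts[:, None] - pts[None, :], axis=-1)  # d_X on 40 random points
A = list(range(0, 40, 4))                                 # an arbitrary landmark subset
eta = np.argmin(D[:, A], axis=1)                          # nearest-neighbor map
dA_X = D[np.ix_(A, A)][np.ix_(eta, eta)]                  # d^A_X
uA_X = slhc(D[np.ix_(A, A)])[np.ix_(eta, eta)]            # u^A_X
assert np.abs(slhc(D) - uA_X).max() <= np.abs(D - dA_X).max() + 1e-12
```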
6 Improvement via Doubling Dimension
Our main theorems, providing additive distortion bounds for Gromov's embedding and for SLHC,
are stated below. The proofs for both theorems are similar, so we only present that of the former.
Theorem 7. Let $(X, d_X)$ be an $n$-point metric space with doubling dimension $\rho$, hyperbolicity
$\mathrm{hyp}(X, d_X) = \delta$, and diameter $D$. Let $p \in X$, and write $(X, t_X) = T(X, d_X, p)$. Then we obtain
the covering number bound:
$$\|d_X - t_X\|_\infty \le \min_{\varepsilon \in (0, D]} \big( 12\varepsilon + 2\delta \log_2(2 N_X(\varepsilon)) \big). \qquad (2)$$
Also suppose $D \ge \frac{\delta\rho}{6 \ln 2}$. Then we obtain the doubling dimension bound:
$$\|d_X - t_X\|_\infty \le 2\delta + 2\delta\rho \Big( \tfrac{13}{2} + \log_2 \tfrac{D}{\delta\rho} \Big). \qquad (3)$$
Theorem 8. Let $(X, d_X)$ be an $n$-point metric space with doubling dimension $\rho$, ultrametricity
$\mathrm{ult}(X, d_X) = \nu$, and diameter $D$. Write $(X, u_X) = H(X, d_X)$. Then we obtain
the covering number bound:
$$\|d_X - u_X\|_\infty \le \min_{\varepsilon \in (0, D]} \big( 4\varepsilon + \nu \log_2(2 N_X(\varepsilon)) \big). \qquad (4)$$
Also suppose $D \ge \frac{\nu\rho}{4 \ln 2}$. Then we obtain the doubling dimension bound:
$$\|d_X - u_X\|_\infty \le \nu + \nu\rho \big( 6 + \log_2 \tfrac{D}{\nu\rho} \big). \qquad (5)$$
Remark 9 (A remark on the NTP). We are now able to return to Question 1 and provide some
answers. Consider a metric space $(X, d_X)$. We can upper bound $\|d_X - u_X^{\mathrm{opt}}\|_\infty$ using the bounds in
Theorem 8, and $\|d_X - t_X^{\mathrm{opt}}\|_\infty$ using the bounds in Theorem 7.
Remark 10 (A remark on parameters). Notice that as hyperbolicity $\delta$ approaches 0 (or ultrametricity
$\nu$ approaches 0), the doubling dimension bounds (hence the covering number bounds) approach 0. Also
note that as $\varepsilon \to 0$, we get $N_X(\varepsilon) \to |X|$, so Theorems 7, 8 reduce to Theorems 3, 4. Experiments lead
us to believe that the interesting range of $\varepsilon$ values is typically a subinterval of $(0, D]$.
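Bounds (2) and (4) can be evaluated directly on data: $N_X(\varepsilon)$ is bounded above by the size of a greedy $\varepsilon$-net, and one scans candidate $\varepsilon$ values over $(0, D]$. A sketch (ours; the 100-point $\varepsilon$-grid is an arbitrary choice):

```python
import numpy as np

def greedy_net_size(D: np.ndarray, eps: float) -> int:
    remaining = list(range(len(D)))
    centers = 0
    while remaining:
        c = remaining[0]
        centers += 1
        remaining = [x for x in remaining if D[x, c] >= eps]  # drop points covered by c
    return centers  # an eps-net size, hence an upper bound on N_X(eps)

def covering_bound(D: np.ndarray, stat: float, a: float, b: float) -> float:
    """Bound (2): a=12, b=2, stat=hyp(X).  Bound (4): a=4, b=1, stat=ult(X)."""
    diam = D.max()
    return min(a * eps + b * stat * np.log2(2 * greedy_net_size(D, eps))
               for eps in np.linspace(diam / 100, diam, 100))
```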
Proof of Theorem 7. Fix $\varepsilon \in (0, D]$ and let $X^\varepsilon = \{x_1, x_2, \ldots, x_k\}$ be a collection of $k = N_X(\varepsilon)$
points that form an $\varepsilon$-net of $X$. Then we may define $d^\varepsilon_X$ and $t^\varepsilon_X$ on $X \times X$ as in §3. Subsequent
application of Theorem 3 and Lemma 2 gives the bound
$$\|d^\varepsilon_X - t^\varepsilon_X\|_{\ell^\infty(X \times X)} \le \|d_{X^\varepsilon} - t_{X^\varepsilon}\|_{\ell^\infty(X^\varepsilon \times X^\varepsilon)} \le 2\delta \log_2(2k) \le 2\delta \log_2(2C\varepsilon^{-\rho}),$$
where we define $C := (8D)^\rho$. Then by the triangle inequality for the $\ell^\infty$ distance, the stability of $T$
(Theorem 6), and using the result that $\|d_X - d^\varepsilon_X\|_{\ell^\infty(X \times X)} \le 2\varepsilon$ (Inequality 1), we get:
$$\|d_X - t_X\|_\infty \le \|d_X - d^\varepsilon_X\|_\infty + \|d^\varepsilon_X - t^\varepsilon_X\|_\infty + \|t^\varepsilon_X - t_X\|_\infty \le 6\|d_X - d^\varepsilon_X\|_\infty + \|d^\varepsilon_X - t^\varepsilon_X\|_\infty \le 12\varepsilon + 2\delta \log_2(2 N_X(\varepsilon)).$$
Since $\varepsilon \in (0, D]$ was arbitrary, this suffices to prove Inequality 2. Applying Lemma 2 yields:
$$\|d_X - t_X\|_\infty \le 12\varepsilon + 2\delta \log_2(2C\varepsilon^{-\rho}).$$
Notice that $C\varepsilon^{-\rho} \ge N_X(\varepsilon) \ge 1$, so the term on the right of the inequality above is positive. Consider
the function
$$f(\varepsilon) = 12\varepsilon + 2\delta + 2\delta \log_2 C - 2\delta\rho \log_2 \varepsilon.$$
The minimizer of this function is obtained by taking a derivative with respect to $\varepsilon$:
$$f'(\varepsilon) = 12 - \frac{2\delta\rho}{\varepsilon \ln 2} = 0 \implies \varepsilon = \frac{\delta\rho}{6 \ln 2}.$$
Since $\varepsilon$ takes values in $(0, D]$, and $\lim_{\varepsilon \to 0} f(\varepsilon) = +\infty$, the value of $f(\varepsilon)$ is minimized at
$\min(D, \frac{\delta\rho}{6 \ln 2})$. By assumption, $D \ge \frac{\delta\rho}{6 \ln 2}$. Since $\|d_X - t_X\|_\infty \le f(\varepsilon)$ for all $\varepsilon \in (0, D]$, it
follows that:
$$\|d_X - t_X\|_\infty \le f\Big( \frac{\delta\rho}{6 \ln 2} \Big) = \frac{2\delta\rho}{\ln 2} + 2\delta + 2\delta\rho \log_2\Big( \frac{48 D \ln 2}{\delta\rho} \Big) \le 2\delta + 2\delta\rho \Big( \frac{13}{2} + \log_2 \frac{D}{\delta\rho} \Big).$$
7 Tightness of our bounds in Theorems 7 and 8
By the construction provided below, we show that our covering number bound for the distortion of
SLHC is asymptotically tight. A similar construction can be used to show that our covering number
bound for Gromov's embedding is also asymptotically tight.
Proposition 11. There exists a sequence $(X_n, d_{X_n})_{n \in \mathbb{N}}$ of finite metric spaces such that as $n \to \infty$,
$$\|d_{X_n} - u_{X_n}\|_\infty \asymp \min_{\varepsilon \in (0, D_n]} \big( 4\varepsilon + \nu_n \log_2(2 N_{X_n}(\varepsilon)) \big) \to 0.$$
Here we have written $(X_n, u_{X_n}) = H(X_n, d_{X_n})$, $\nu_n = \mathrm{ult}(X_n, d_{X_n})$, and $D_n = \mathrm{diam}(X_n, d_{X_n})$.
Proof of Proposition 11. After defining $X_n$ for $n \in \mathbb{N}$ below, we will denote the error term, our
covering number upper bound, and our Gromov-style upper bound as follows:
$$E_n := \|d_{X_n} - u_{X_n}\|_\infty, \qquad B_n := \min_{\varepsilon \in (0, D_n]} \beta(n, \varepsilon), \qquad G_n := \log_2(2|X_n|)\, \mathrm{ult}(X_n, d_{X_n}),$$
where $\beta : \mathbb{N} \times [0, \infty) \to \mathbb{R}$ is defined by $\beta(n, \varepsilon) = 4\varepsilon + \nu_n \log_2(2 N_{X_n}(\varepsilon))$.
Here we write $|S|$ to denote the cardinality of a set $S$. Recall that the separation of a finite metric space
$(X, d_X)$ is the quantity $\mathrm{sep}(X, d_X) := \min_{x \ne x' \in X} d_X(x, x')$. Let $(V, u_V)$ be the finite ultrametric
space consisting of two equidistant points with common distance 1. For each $n \in \mathbb{N}$, let $L_n$ denote
the line metric space obtained by choosing $(n+1)$ equally spaced points with separation $\frac{1}{n^2}$ from the
interval $[0, \frac{1}{n}]$, and endowing this set with the restriction of the Euclidean metric, denoted $d_{L_n}$. One
can verify that $\mathrm{ult}(L_n, d_{L_n}) \le \frac{1}{2n}$. Finally, for each $n \in \mathbb{N}$ we define $X_n := V \times L_n$, and endow
$X_n$ with the following metric:
$$d_{X_n}\big( (v, l), (v', l') \big) := \max\big( d_V(v, v'),\ d_{L_n}(l, l') \big), \qquad v, v' \in V,\ l, l' \in L_n.$$
Claim 1. $\mathrm{ult}(X_n, d_{X_n}) = \mathrm{ult}(L_n, d_{L_n}) \le \frac{1}{2n}$.
For a proof, see Appendix B.
Claim 2. $E_n \asymp \mathrm{diam}(L_n, d_{L_n}) = \frac{1}{n}$. To see this, let $n \in \mathbb{N}$, and let $x = (v, l), x' = (v', l') \in X_n$
be two points realizing $E_n$. Suppose first that $v = v'$. Then an optimal chain from $(v, l)$ to $(v, l')$ only
has to incur the cost of moving along the $L_n$ coordinate. As such, we obtain $u_{X_n}(x, x') \le \frac{1}{n^2}$, with
equality if and only if $l \ne l'$. Then,
$$E_n = \max_{x,x' \in X_n} |d_{X_n}(x, x') - u_{X_n}(x, x')| = \max_{l,l' \in L_n} \big| d_{L_n}(l, l') - \tfrac{1}{n^2} \big| = \tfrac{1}{n} - \tfrac{1}{n^2} \asymp \tfrac{1}{n}.$$
Note that the case $v \ne v'$ is not allowed, because then we would obtain $d_{X_n}(x, x') = d_V(v, v') = u_{X_n}(x, x')$, since $\mathrm{sep}(V, d_V) \ge \mathrm{diam}(L_n, d_{L_n})$ and all the points in $V$ are equidistant. Thus we
would obtain $|d_{X_n}(x, x') - u_{X_n}(x, x')| = 0$, which is a contradiction because we assumed that $x, x'$
realize $E_n$.
Claim 3. For each $n \in \mathbb{N}$, $\varepsilon \in (0, D_n]$, we have:
$$N_{X_n}(\varepsilon) = \begin{cases} N_V(\varepsilon) & : \varepsilon > \mathrm{sep}(V, d_V), \\ |V| & : \mathrm{diam}(L_n, d_{L_n}) < \varepsilon \le \mathrm{sep}(V, d_V), \\ |V|\, N_{L_n}(\varepsilon) & : \varepsilon \le \mathrm{diam}(L_n, d_{L_n}). \end{cases}$$
To see this, note that in the first two cases, any $\varepsilon$-ball centered at a point $(v, l)$ automatically contains
all of $\{v\} \times L_n$, so $N_{X_n}(\varepsilon) = N_V(\varepsilon)$. Specifically in the range $\mathrm{diam}(L_n, d_{L_n}) < \varepsilon \le \mathrm{sep}(V, d_V)$,
we need exactly one $\varepsilon$-ball for each $v \in V$ to cover $X_n$. Finally in the third case, we need $N_{L_n}(\varepsilon)$
$\varepsilon$-balls to cover $\{v\} \times L_n$ for each $v \in V$. This yields the stated estimate.
By the preceding claims, we now have the following for each $n \in \mathbb{N}$, $\varepsilon \in (0, D_n]$:
$$\beta(n, \varepsilon) \le 4\varepsilon + \tfrac{1}{2n}\log_2(2 N_{X_n}(\varepsilon)) = \begin{cases} 4\varepsilon + \tfrac{1}{2n}\log_2(2 N_V(\varepsilon)) & : \varepsilon > \mathrm{sep}(V), \\ 4\varepsilon + \tfrac{1}{2n}\log_2(2|V|) & : \mathrm{diam}(L_n) < \varepsilon \le \mathrm{sep}(V), \\ 4\varepsilon + \tfrac{1}{2n}\log_2(2|V| N_{L_n}(\varepsilon)) & : \varepsilon \le \mathrm{diam}(L_n). \end{cases}$$
Notice that for sufficiently large $n$, $\inf_{\varepsilon > \mathrm{diam}(L_n)} \beta(n, \varepsilon) = \beta(n, \frac{1}{n})$. Then we have:
$$\tfrac{1}{n} \asymp E_n \le B_n = \min_{\varepsilon \in (0, D_n]} \beta(n, \varepsilon) \le \beta(n, \tfrac{1}{n}) \le \tfrac{C}{n},$$
for some constant $C > 0$. Here the first inequality follows from the proof of Claim 2, the second
from Theorem 8, and the third from our observation above. It follows that $E_n \asymp B_n \asymp \frac{1}{n} \to 0$.
Remark 12. Given the setup of the preceding proof, note that the Gromov-style bound behaves as:
$$G_n = \beta(n, 0) = \tfrac{1}{2n}\log_2\big( 2|V|(n+1) \big) \asymp C' \frac{\log_2(n+1)}{n},$$
for some constant $C' > 0$. So $G_n$ approaches 0 at a rate strictly slower than that of $E_n$ and $B_n$.
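The construction is simple enough to verify numerically. The following self-contained sketch (ours) builds $X_n = V \times L_n$ with the max metric and checks that $E_n$ matches the value $\frac{1}{n} - \frac{1}{n^2}$ derived in Claim 2:

```python
import numpy as np

def slhc(D):  # minimax chain cost (see the SLHC sketch in Section 4)
    U = D.copy()
    for k in range(len(U)):
        U = np.minimum(U, np.maximum.outer(U[:, k], U[k, :]))
    return U

def X_n(n: int) -> np.ndarray:
    line = np.arange(n + 1) / n**2                 # L_n: n+1 points, separation 1/n^2
    dL = np.abs(np.subtract.outer(line, line))
    dV = np.array([[0.0, 1.0], [1.0, 0.0]])        # V: two points at distance 1
    # max metric on the product V x L_n (flattened index = v*(n+1) + l)
    return np.maximum(np.kron(dV, np.ones_like(dL)),
                      np.kron(np.ones((2, 2)), dL))

for n in (5, 10, 20):
    D = X_n(n)
    E_n = np.abs(D - slhc(D)).max()
    print(n, E_n, 1 / n - 1 / n**2)                # the two columns agree (Claim 2)
```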
8 Discussion
We are motivated by a particular aspect of the numerical taxonomy problem, namely, the distortion
incurred when passing from a metric to its optimal tree embedding. We describe and explore a
duality between a tree embedding method proposed by Gromov and the well known SLHC method
for embedding a metric space into an ultrametric tree. Motivated by this duality, we propose a novel
metric space statistic that we call ultrametricity, and give a novel, tight bound on the distortion of the
SLHC method depending on cardinality and ultrametricity. We improve this Gromov-style bound
by replacing the dependence on cardinality by a dependence on doubling dimension, and produce a
family of examples proving tightness of this dimension-based bound. By invoking duality again, we
are able to improve Gromov's original bound on the distortion of his tree embedding method. More
specifically, we replace the dependence on cardinality (a set-theoretic notion) by a dependence on
doubling dimension, which is truly a metric notion.
Through Proposition 11, we are able to prove that our bound is not just asymptotically tight, but that
it is strictly better than the corresponding Gromov-style bound. Indeed, Gromov's bound can perform
arbitrarily worse than our dimension-based bound. We construct an explicit example to verify this
claim in Appendix A, Remark 14, where we also provide a practical demonstration of our methods.
References
[1] Ittai Abraham, Mahesh Balakrishnan, Fabian Kuhn, Dahlia Malkhi, Venugopalan Ramasubramanian, and Kunal Talwar. Reconstructing approximate tree metrics. In Proceedings of the 26th annual ACM symposium on Principles of Distributed Computing. ACM, 2007.
[2] Muad Abu-Ata and Feodor F. Dragan. Metric tree-like structures in real-life networks: an empirical study. arXiv preprint arXiv:1402.3364, 2014.
[3] Richa Agarwala, Vineet Bafna, Martin Farach, Mike Paterson, and Mikkel Thorup. On the approximability of numerical taxonomy (fitting distances by tree metrics). SIAM Journal on Computing, 28(3):1073-1085, 1998.
[4] Yair Bartal. Probabilistic approximation of metric spaces and its algorithmic applications. In Foundations of Computer Science. IEEE, 1996.
[5] Jean-Pierre Barthélemy and Alain Guénoche. Trees and proximity representations. 1991.
[6] Gunnar Carlsson and Facundo Mémoli. Characterization, stability and convergence of hierarchical clustering methods. The Journal of Machine Learning Research, 2010.
[7] John Chakerian and Susan Holmes. Computational tools for evaluating phylogenetic and hierarchical clustering trees. Journal of Computational and Graphical Statistics, 2012.
[8] Victor Chepoi and Bernard Fichet. $\ell^\infty$ approximation via subdominants. Journal of Mathematical Psychology, 44(4):600-616, 2000.
[9] Michel Marie Deza and Elena Deza. Encyclopedia of distances. Springer, 2009.
[10] Jittat Fakcharoenphol, Satish Rao, and Kunal Talwar. A tight bound on approximating arbitrary metrics by tree metrics. In Proceedings of the thirty-fifth annual ACM symposium on Theory of Computing, pages 448-455. ACM, 2003.
[11] Martin Farach, Sampath Kannan, and Tandy Warnow. A robust model for finding optimal evolutionary trees. Algorithmica, 13(1-2):155-179, 1995.
[12] Mikhael Gromov. Hyperbolic groups. Springer, 1987.
[13] Piotr Indyk and Jiri Matousek. Low-distortion embeddings of finite metric spaces. In Handbook of Discrete and Computational Geometry, pages 177-196, 2004.
[14] Matthäus Kleindessner and Ulrike von Luxburg. Uniqueness of ordinal embedding. In COLT, pages 40-67, 2014.
[15] Robert Krauthgamer and James R. Lee. Navigating nets: simple algorithms for proximity search. In Proceedings of the fifteenth annual ACM-SIAM symposium on Discrete Algorithms, pages 798-807. Society for Industrial and Applied Mathematics, 2004.
[16] Mirko Krivanek. The complexity of ultrametric partitions on graphs. Information Processing Letters, 27(5):265-270, 1988.
[17] Yi Li and Philip M. Long. Learnability and the doubling dimension. In Advances in Neural Information Processing Systems, pages 889-896, 2006.
[18] Kantilal Varichand Mardia, John T. Kent, and John M. Bibby. Multivariate analysis. 1980.
[19] Charles Semple and Mike A. Steel. Phylogenetics, volume 24. Oxford University Press on Demand, 2003.
[20] Albert D. Shieh, Tatsunori B. Hashimoto, and Edoardo M. Airoldi. Tree preserving embedding. Proceedings of the National Academy of Sciences of the United States of America, 108(41):16916-16921, 2011.
[21] Laurens van der Maaten and Geoffrey Hinton. Visualizing data using t-SNE. Journal of Machine Learning Research, 9(2579-2605):85, 2008.
| 6431 |@word polynomial:1 norm:1 open:1 bn:4 kent:1 invoking:3 contains:2 united:1 interestingly:1 past:1 existing:3 bitmap:1 current:1 ddim:1 dx:99 written:2 john:3 realize:1 fn:1 numerical:4 additive:12 partition:6 thrust:1 subsequent:1 noche:1 lemy:1 inspection:1 xk:1 smith:2 realizing:1 characterization:2 math:1 gx:6 simpler:2 phylogenetic:2 mathematical:1 along:4 c2:2 dn:6 symposium:3 jiri:1 prove:8 shorthand:1 fitting:1 introduce:2 x0:56 indeed:1 automatically:1 cardinality:16 ua:4 spain:1 estimating:1 bounded:5 underlying:2 moreover:3 notation:1 begin:1 provided:1 finding:4 ve1:2 pseudo:1 every:2 multidimensional:1 tie:1 exactly:1 enjoy:1 positive:1 engineering:2 local:2 consequence:1 joining:1 oxford:1 hyperbolicity:12 approximately:1 might:1 studied:3 matousek:1 range:2 practical:2 thirty:1 union:1 x3:11 area:1 empirical:1 maxx:1 hyperbolic:2 get:2 convenience:1 undesirable:1 minj6:1 context:1 applying:1 writing:1 restriction:4 map:5 starting:2 resolution:1 immediately:1 semple:1 contradiction:1 holmes:1 oh:3 his:1 embedding:46 proving:2 n12:1 stability:13 exploratory:1 notion:8 analogous:1 ultrametric:25 resp:1 target:4 construction:7 suppose:3 strengthen:1 kunal:2 trick:1 satisfying:2 observed:2 cloud:1 mike:2 preprint:1 solved:1 capture:1 susan:1 complexity:1 depend:1 tight:7 incur:1 triangle:3 gu:1 hashimoto:1 sep:7 facundo:2 tx:24 various:3 chapter:1 america:1 surrounding:1 distinct:1 describe:2 choosing:1 quite:1 apparent:1 supplementary:1 jean:1 distortion:30 tightness:4 statistic:4 indyk:1 tatsunori:1 moli:2 sequence:1 net:5 propose:1 product:1 academy:1 frobenius:1 kh:1 convergence:1 cluster:1 bartal:1 produce:1 depending:8 develop:1 measured:2 nearest:2 strong:1 involves:1 quantify:1 kuhn:1 laurens:1 radius:1 correct:1 centered:2 material:1 fix:1 generalization:1 suffices:1 preliminary:1 opt:2 biological:3 proposition:3 strictly:2 hold:1 proximity:2 sufficiently:2 algorithmic:2 claim:6 omitted:1 uniqueness:1 kleindessner:1 bridge:1 tool:2 hope:1 always:1 endow:1 l0:7 focus:1 notational:1 improvement:3 prevalent:1 industrial:1 sense:1 helpful:1 dependent:1 voronoi:6 treelike:3 typically:6 hidden:1 agarwala:1 dual:3 colt:1 denoted:3 priori:1 special:3 equal:1 construct:1 piotr:1 biology:1 x4:6 ven:1 minimized:2 np:1 inherent:1 pathological:1 preserve:3 national:1 replaced:1 algorithmica:1 consisting:1 geometry:1 n1:5 attempt:1 hyp:6 interest:1 chowdhury:2 truly:1 swapping:1 chain:7 kt:2 partial:1 injective:1 respective:1 tree:59 euclidean:3 minimal:2 gn:3 rao:1 cover:2 bibby:1 goodness:1 cost:2 deviation:1 subset:4 kdx:25 satish:1 learnability:1 answer:3 subdominant:1 combined:1 thoroughly:1 siam:2 vineet:1 lee:1 probabilistic:1 together:1 again:1 von:1 containing:1 minx6:1 possibly:1 mikkel:1 worse:1 derivative:1 style:4 return:1 michel:1 li:1 account:3 vei:3 depends:2 vi:7 multiplicative:4 view:1 root:1 ulrike:1 complicated:1 amorphous:1 contribution:1 yield:6 spaced:1 farach:2 produced:1 venugopalan:1 maxc:1 definition:5 ty:1 topt:3 james:1 proof:13 gain:1 ask:1 recall:3 knowledge:1 lim:1 dimensionality:1 organized:1 d0x:6 back:1 barth:1 appears:2 higher:1 ta:4 follow:1 improved:3 just:2 dendrogram:4 transport:1 replacing:1 quality:2 columbus:3 believe:1 verify:3 former:1 hence:1 equality:1 visualizing:1 rooted:1 covering:10 complete:1 theoretic:1 feodor:1 workhorse:1 image:2 consideration:1 novel:5 ohio:3 charles:1 common:4 endowing:1 behaves:1 overview:1 volume:1 mahesh:1 refer:1 counterexample:1 rd:1 uv:1 mathematics:3 pointed:4 moving:1 ktx:2 closest:1 own:1 
multivariate:1 perspective:1 inf:1 claimed:1 ntp:6 certain:1 inequality:7 nln:3 arbitrarily:1 life:1 yi:1 exploited:1 victor:1 der:1 preserving:5 preceding:3 multiple:1 desirable:1 long:1 equally:1 controlled:1 variant:3 involving:1 metric:101 fifteenth:1 arxiv:2 albert:1 cell:1 c1:2 preserved:1 whereas:1 interval:1 sends:1 file:1 nv:3 induced:2 balakrishnan:1 dxn:10 nontrivially:1 call:4 paterson:1 embeddings:6 xj:3 fit:1 equidistant:3 psychology:1 reduce:1 regarding:1 coordinate:1 whether:2 motivated:2 linkage:8 effort:1 edoardo:1 passing:1 remark:10 matlab:1 useful:1 latency:1 covered:1 encyclopedia:1 induces:1 diameter:3 sl:4 exist:2 notice:5 dln:10 disjoint:1 discrete:4 write:11 abu:1 group:1 key:1 gunnar:1 marie:1 utilize:1 v1:1 asymptotically:4 graph:1 luxburg:1 talwar:2 letter:1 family:1 vn:1 separation:2 maaten:1 appendix:5 scaling:2 bound:55 internet:2 encountered:1 annual:3 nontrivial:1 x2:13 aspect:1 argument:3 min:11 approximability:1 martin:2 department:4 ball:6 kd:4 across:1 reconstructing:1 matth:1 intuitively:2 explained:2 dv:6 ln:26 previously:1 turn:1 ordinal:2 flip:1 thorup:1 operation:2 endowed:2 apply:2 observe:4 hierarchical:6 v2:1 pierre:1 yair:1 slower:1 original:3 uopt:2 clustering:7 include:1 krauthgamer:1 graphical:1 log2:24 exploit:2 especially:2 approximating:1 classical:2 society:1 objective:1 question:6 already:1 occurs:1 quantity:2 dependence:6 md:14 evolutionary:2 navigating:1 subspace:2 distance:10 unable:1 philip:1 nx:8 argue:2 trivial:2 reason:1 kannan:1 mini:1 providing:2 demonstration:2 bafna:1 setup:1 phylogenetics:2 taxonomy:4 robert:1 sne:1 stated:2 steel:1 design:1 unknown:1 perform:3 upper:8 observation:1 datasets:2 finite:12 fabian:1 defining:1 grew:1 hinton:1 perturbation:1 arbitrary:2 community:1 namely:3 kl:1 connection:1 barcelona:1 nip:1 address:1 able:5 suggested:1 proceeds:2 below:5 max:19 including:1 treated:1 natural:2 rely:1 improve:2 numerous:1 sn:1 dragan:1 nice:1 literature:5 carlsson:1 asymptotic:1 nxn:5 embedded:1 fully:1 highlight:1 interesting:1 geoffrey:1 ingredient:1 triple:1 foundation:1 incurred:2 s0:2 principle:1 collaboration:1 shieh:1 ata:1 course:1 summary:1 deza:2 alain:1 neighbor:3 taking:1 fifth:1 distributed:1 van:1 dimension:26 xn:19 world:3 evaluating:2 rich:1 gram:1 ending:1 made:2 collection:1 social:1 approximate:1 global:2 handbook:1 assumed:2 xi:15 demo:1 x00:6 search:1 robust:2 inherently:2 symmetry:1 subinterval:1 hc:6 domain:4 da:14 vj:3 main:4 abraham:1 n2:3 allowed:1 body:1 x1:15 referred:1 en:8 fakcharoenphol:1 embeds:1 explicit:1 samir:1 candidate:1 mardia:1 third:2 warnow:1 theorem:33 embed:3 elena:1 showing:1 virtue:3 grouping:2 intrinsic:4 exists:3 airoldi:1 demand:1 cx:3 simply:1 explore:1 visual:1 highlighting:1 ordered:1 ux:17 doubling:19 springer:2 minimizer:1 gromov:38 acm:5 goal:2 formulated:1 diam:14 quantifying:1 replace:1 hard:1 specifically:3 lemma:3 max0:1 bernard:1 duality:10 osu:3 rarely:1 ult:11 interpoint:1 |
6,005 | 6,432 | Exact Recovery of Hard Thresholding Pursuit
Xiao-Tong Yuan
B-DAT Lab
Nanjing University of Info. Sci.&Tech.
Nanjing, Jiangsu, 210044, China
xtyuan@nuist.edu.cn
Ping Li†‡ Tong Zhang†
†Depart. of Statistics and ‡Depart. of CS
Rutgers University
Piscataway, NJ, 08854, USA
{pingli,tzhang}@stat.rutgers.edu
Abstract
The Hard Thresholding Pursuit (HTP) is a class of truncated gradient descent
methods for finding sparse solutions of $\ell_0$-constrained loss minimization problems. The HTP-style methods have been shown to have strong approximation
guarantee and impressive numerical performance in high dimensional statistical
learning applications. However, the current theoretical treatment of these methods has traditionally been restricted to the analysis of parameter estimation consistency. It remains an open problem to analyze the support recovery performance
(a.k.a., sparsistency) of this type of methods for recovering the global minimizer
of the original NP-hard problem. In this paper, we bridge this gap by showing,
for the first time, that exact recovery of the global sparse minimizer is possible
for HTP-style methods under restricted strong condition number bounding conditions. We further show that HTP-style methods are able to recover the support
of certain relaxed sparse solutions without assuming bounded restricted strong
condition number. Numerical results on simulated data confirm our theoretical
predictions.
1 Introduction
In modern high dimensional data analysis tasks, a routinely faced challenge is that the number of
collected samples is substantially smaller than the dimensionality of features. In order to achieve
consistent estimation in such small-sample-large-feature settings, additional assumptions need to
be imposed on the model. Among others, the low-dimensional structure prior is the most popular
assumption made in high dimensional analysis. This structure can often be captured by imposing
sparsity constraint on model space, leading to the following $\ell_0$-constrained minimization problem:
$$\min_{x \in \mathbb{R}^p} f(x), \quad \text{s.t. } \|x\|_0 \le k, \tag{1}$$
where $f : \mathbb{R}^p \to \mathbb{R}$ is a smooth convex loss function and $\|x\|_0$ denotes the number of nonzero
entries in x. Due to the cardinality constraint, Problem (1) is not only non-convex, but also NP-hard
in general (Natarajan, 1995). Thus, it is desirable to develop efficient computational procedures to
approximately solve this problem.
When the loss function is squared regression error, Problem (1) reduces to the compressive sensing
problem (Donoho, 2006) for which a vast body of greedy selection algorithms have been proposed
including orthogonal matching pursuit (OMP) (Pati et al., 1993), compressed sampling matching
pursuit (CoSaMP) (Needell & Tropp, 2009), hard thresholding pursuit (HTP) (Foucart, 2011) and iterative hard thresholding (IHT) (Blumensath & Davies, 2009) to name a few. The greedy algorithms
designed for compressive sensing can usually be generalized to minimize non-quadratic loss functions (Shalev-Shwartz et al., 2010; Yuan & Yan, 2013; Bahmani et al., 2013). Comparing to those
convex-relaxation-based methods (Beck & Teboulle, 2009; Agarwal et al., 2010), these greedy selection algorithms often exhibit similar accuracy guarantees but more attractive computational efficiency and scalability.
Recently, the HTP/IHT-style methods have gained significant interests and they have been witnessed
to offer the fastest and most scalable solutions in many cases (Yuan et al., 2014; Jain et al., 2014).
The main theme of this class of methods is to iteratively perform gradient descent followed by a
truncation operation to preserve the most significant entries, and an (optional) debiasing operation
to minimize the loss over the selected entries. In (Blumensath, 2013; Yuan et al., 2014), the rate
of convergence and parameter estimation error of HTP/IHT-style methods were established under
proper Restricted Isometry Property (RIP) (or restricted strong condition number) bound conditions.
Jain et al. (2014) presented and analyzed several relaxed variants of HTP/IHT-style algorithms for
which the estimation consistency can be established without requiring the RIP conditions. Very
recently, the extensions of HTP/IHT-style methods to structured and stochastic sparse learning problems have been investigated in (Jain et al., 2016; Li et al., 2016; Shen & Li, 2016).
1.1 An open problem: exact recovery of HTP
In this paper, we are particularly interested in the exact recovery and support recovery performance
of the HTP-style methods. A pseudo-code of HTP is outlined in Algorithm 1 which is also known as
GraHTP in (Yuan et al., 2014). Although this type of methods have been extensively analyzed in the
original paper (Foucart, 2011) for compressive sensing and several recent followup work (Yuan et al.,
2014; Jain et al., 2014, 2016) for generic sparse minimization, the state-of-the-art is only able to derive convergence rates and parameter estimation error bounds for HTP. It remains an open and challenging problem to analyze its ability to exactly recover the global sparse minimizer of Problem (1)
in general settings. Actually, the support/structure recovery analysis is the main challenge in many
important sparsity models including compressive sensing and graphical models learning (Jalali et al.,
2011; Ravikumar et al., 2011): once the support is recovered, computing the actual nonzero coefficients just boils down to solving a convex minimization problem restricted on the supporting set.
Since the output of HTP is always k-sparse, the existing estimation error results in (Foucart, 2011;
Yuan et al., 2014; Jain et al., 2014) naturally imply some support recovery conditions. For example,
for perfect measurements, the results in (Foucart, 2011; Yuan et al., 2014) guarantee that HTP can
exactly recover the underlying true sparse model parameters. For noisy models, roughly speaking,
as long as the smallest (in magnitude) nonzero entry of the k-sparse minimizer of (1) is larger than
the estimation error bound of HTP, an exact recovery of the minimizer can be guaranteed. However,
these pieces of support recovery results implied by the estimation error bound turn out to be loose
when compared to the main results we will derive in the current paper.
Algorithm 1: Hard Thresholding Pursuit.
Input: Loss function $f(x)$, sparsity level $k$, step-size $\eta$.
Initialization: $x^{(0)} = 0$, $t = 1$.
Output: $x^{(t)}$.
repeat
  (S1) Compute $\tilde{x}^{(t)} = x^{(t-1)} - \eta \nabla f(x^{(t-1)})$;
  (S2) Select $F^{(t)} = \mathrm{supp}(\tilde{x}^{(t)}, k)$, the indices of $\tilde{x}^{(t)}$ with the largest $k$ absolute values;
  (S3) Compute $x^{(t)} = \arg\min\{f(x) : \mathrm{supp}(x) \subseteq F^{(t)}\}$;
  (S4) Update $t \leftarrow t + 1$;
until $F^{(t)} = F^{(t-1)}$;
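To make the iteration concrete, the following is a minimal NumPy sketch of Algorithm 1 specialized to the least-squares loss $f(x) = \|v - Ux\|^2/(2n)$, for which the debiasing step (S3) reduces to a restricted least-squares problem with a closed-form solution. The function name, the iteration cap, and the least-squares specialization are our illustration choices, not part of the paper.

```python
import numpy as np

def htp(U, v, k, eta, max_iter=500):
    """Algorithm 1 (HTP) for the least-squares loss f(x) = ||v - Ux||^2 / (2n)."""
    n, p = U.shape
    x = np.zeros(p)
    F_prev = None
    for _ in range(max_iter):
        # (S1) gradient descent step on the full vector
        x_tilde = x - eta * (U.T @ (U @ x - v)) / n
        # (S2) indices of the k largest entries in absolute value
        F = np.sort(np.argsort(-np.abs(x_tilde))[:k])
        # (S3) debiasing: minimize f over the support F (restricted least squares)
        x = np.zeros(p)
        x[F] = np.linalg.lstsq(U[:, F], v, rcond=None)[0]
        # (S4) / stopping rule: terminate once the support stabilizes
        if F_prev is not None and np.array_equal(F, F_prev):
            break
        F_prev = F
    return x
```

For a general smooth loss, (S3) has no closed form and would instead be solved by an inner convex solver restricted to the coordinates in $F^{(t)}$.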
1.2 Overview of our results
The core contribution in this work is a deterministic support recovery analysis of HTP-style methods which to our knowledge has not been systematically conducted elsewhere in literature. Our first result (see Theorem 1) shows that HTP as described in Algorithm 1 is able to exactly recover the $k$-sparse minimizer $x^\star = \arg\min_{\|x\|_0 \le k} f(x)$ if $x^\star_{\min}$, i.e., the smallest non-zero entry of $x^\star$, is significantly larger than $\|\nabla f(x^\star)\|_\infty$ and certain RIP-type condition can be fulfilled as well. Moreover, the exact recovery can be guaranteed in finite running of Algorithm 1 with geometric rate of convergence. Our second result (see Theorem 2) shows that the support recovery of an arbitrary $k$-sparse vector $\bar{x}$ can be guaranteed if $\bar{x}_{\min}$ is well discriminated from $\sqrt{k}\|\nabla f(x^\star)\|_\infty$ or $\|\nabla f(x^\star)\|_\infty$, pending on the optimality of $\bar{x}$ over its own supporting set. Our third result (see Theorem 3) shows that HTP is able to recover the support of certain relaxed sparse minimizer $\bar{x}$ with $\|\bar{x}\|_0 \le \bar{k}$ under an arbitrary restricted strong condition number. More formally, given the restricted strong smoothness/convexity (see Definition 1) constants $M_{2k}$ and $m_{2k}$, the recovery of $\mathrm{supp}(\bar{x})$ is possible if $k \ge (1 + 16 M_{2k}^2 / m_{2k}^2)\bar{k}$ and the smallest non-zero element in $\bar{x}$ is significantly larger than the rooted objective value gap $\sqrt{f(\bar{x}) - f(x^\star)}$. The support recovery can also be guaranteed in finite iteration for this case. By specifying our deterministic analysis to least squared regression and logistic regression, we are able to obtain the sparsistency guarantees of HTP for these statistical learning examples. Monte-Carlo simulation results confirm our theoretical predictions. Table 1 summarizes a high-level comparison between our work and the state-of-the-art analysis for HTP-style methods.

Table 1: Comparison between our results and several prior results on HTP-style algorithms.
Related Work        | Target Solution                                                            | RIP Condition Free                                          | Support Recovery
(Foucart, 2011)     | True $k$-sparse signal $\bar{x}$                                           | No                                                          | No
(Yuan et al., 2014) | Arbitrary $\bar{x}$ with $\|\bar{x}\|_0 \le k$                             | No                                                          | No
(Jain et al., 2014) | $\bar{x} = \arg\min_{\|x\|_0 \le \bar{k}} f(x)$ for proper $\bar{k} \le k$ | Yes                                                         | No
Ours                | Arbitrary $\bar{x}$ with $\|\bar{x}\|_0 \le k$                             | No (for $\|\bar{x}\|_0 = k$); Yes (for $\|\bar{x}\|_0 < k$) | Yes
1.3 Notation and organization
Notation Let $x \in \mathbb{R}^p$ be a vector and $F$ be an index set. We denote $[x]_i$ the $i$-th entry of vector $x$, $x_F$ the restriction of $x$ to index set $F$ and $x_k$ the restriction of $x$ to the top $k$ (in absolute value) entries. The notation $\mathrm{supp}(x)$ represents the index set of nonzero entries of $x$ and $\mathrm{supp}(x, k)$ represents the index set of the top $k$ (in absolute value) entries of $x$. We conventionally define $\|x\|_\infty = \max_i |[x]_i|$ and define $x_{\min} = \min_{i \in \mathrm{supp}(x)} |[x]_i|$.
Organization This paper proceeds as follows: In §2, we analyze the exact recovery performance of HTP. The applications of our analysis to least squared regression and logistic regression models are presented in §3. Monte-Carlo simulation results are reported in §4. We conclude this paper in §5.
Due to space limit, all the technical proofs of our results are deferred to an appendix section which
is included in the supplementary material.
2 A Deterministic Exact Recovery Analysis
In this section, we analyze the exact support recovery performance of HTP as outlined in Algorithm 1. At a high level, the theory developed in this section can be decomposed into the following three ingredients:
- First, we will investigate the support recovery behavior of the global $k$-sparse minimizer $x^\star = \arg\min_{\|x\|_0 \le k} f(x)$. The related result is summarized in Proposition 1.
- Second, we will present in Theorem 1 the guarantee of HTP for exactly recovering $x^\star$.
- Finally, by combining the above two results we will be able to establish the support recovery result of HTP in Theorem 2. Furthermore, we derive an RIP-condition-free support recovery result in Theorem 3.
Our analysis relies on the conditions of Restricted Strong Convexity/Smoothness (RSC/RSS) which
are conventionally used in previous analysis for HTP (Yuan et al., 2014; Jain et al., 2014).
Definition 1 (Restricted Strong Convexity/Smoothness). For any integer s > 0, we say f (x) is
restricted ms -strongly convex and Ms -smooth if there exist ms , Ms > 0 such that
ms
Ms
?x ? y?2 ? f (x) ? f (y) ? ??f (y), x ? y? ?
?x ? y?2 , ??x ? y?0 ? s.
(2)
2
2
The ratio $M_s/m_s$, which measures the curvature of the loss function over sparse subspaces, will be
referred to as restricted strong condition number in this paper.
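For the least-squares loss the Hessian is constant, so the constants of Definition 1 are exactly the extreme eigenvalues of the $s \times s$ principal submatrices of the Gram matrix. The brute-force sketch below (our own illustration, feasible only for small $p$ since it enumerates supports) makes the definition concrete.

```python
import itertools
import numpy as np

def restricted_constants(U, s):
    """m_s and M_s of Definition 1 for f(w) = ||v - Uw||^2 / (2n): extreme
    eigenvalues over all s x s principal submatrices of U^T U / n.
    Exponential in p, so for illustration on small problems only."""
    n, p = U.shape
    G = U.T @ U / n
    m_s, M_s = np.inf, 0.0
    for S in itertools.combinations(range(p), s):
        eigs = np.linalg.eigvalsh(G[np.ix_(S, S)])
        m_s = min(m_s, eigs[0])
        M_s = max(M_s, eigs[-1])
    return m_s, M_s
```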
2.1 Preliminary: Support recovery of $x^\star$
Given a target solution $\bar{x}$, the following result establishes some sufficient conditions under which $x^\star$ is able to exactly recover the supporting set of $\bar{x}$. A proof of this result is provided in Appendix B (see the supplementary file).
Proposition 1. Assume that $f$ is $M_{2k}$-smooth and $m_{2k}$-strongly convex. Let $\bar{x}$ be an arbitrary $k$-sparse vector. Let $\bar{x}^\star = \arg\min_{\mathrm{supp}(x) \subseteq \mathrm{supp}(\bar{x})} f(x)$ and $\bar{\lambda} > 0$ be a scalar such that
$$f(\bar{x}^\star) = f(\bar{x}) + \langle \nabla f(\bar{x}), \bar{x}^\star - \bar{x} \rangle + \frac{\bar{\lambda}}{2}\|\bar{x}^\star - \bar{x}\|_1^2.$$
Then we have $\mathrm{supp}(\bar{x}) = \mathrm{supp}(x^\star)$ if either of the following two conditions is satisfied:
(1) $\bar{x}_{\min} \ge \frac{2\sqrt{2k}}{m_{2k}}\|\nabla f(\bar{x})\|_\infty$;
(2) $\bar{x}_{\min} \ge \left(\frac{\sqrt{\bar{\gamma}}}{M_{2k}} + \frac{2\bar{\gamma} + 2}{\bar{\lambda}}\right)\|\nabla f(\bar{x})\|_\infty$, where $\frac{m_{2k}}{M_{2k}} \ge \max\left\{\frac{\sqrt{3}}{2}, \frac{3\bar{\gamma}+1}{4\bar{\gamma}}\right\}$ for some $\bar{\gamma} > 1$.
Remark 1. The quantity $\bar{\lambda}$ actually measures the strong-convexity of $f$ at the point $(\bar{x}^\star - \bar{x})$ in $\ell_1$-norm. From its definition we can verify that $\bar{\lambda}$ is valued in the interval $[m_{2k}/k, M_{2k}]$ if $\bar{x} \neq \bar{x}^\star$. The closer $\bar{\lambda}$ is to $M_{2k}$, the weaker lower bound condition can be imposed on $\bar{x}_{\min}$ in the condition (2). In (Nutini et al., 2015), a similar strong-convexity measurement has been defined over the entire vector space for refined convergence analysis of the coordinate descent methods. Different from (Nutini et al., 2015), we only require such an $\ell_1$-norm strong-convexity condition to hold at certain target points of interest. Particularly, if $\bar{x} = \bar{x}^\star$, i.e., $\bar{x}$ is optimal over its supporting set, then we may simply set $\bar{\lambda} = \infty$ in Proposition 1.
2.2 Main results: Support recovery of HTP
Equipped with Proposition 1, it will be straightforward to guarantee the support recovery of HTP if we can derive sufficient conditions under which HTP is able to exactly recover $x^\star$. Denote $F^\star = \mathrm{supp}(x^\star)$. Intuitively, $x^\star_{\min}$ should be significantly larger than $\|\nabla f(x^\star)\|_\infty$ to attract HTP to be stuck at $x^\star$ (see Lemma 5 in Appendix B for a formal elaboration). The exact recovery analysis also relies on the following quantity $\delta^\star$, which measures the gap between the minimal $k$-sparse objective value $f(x^\star)$ and the remaining ones over supporting sets other than $\mathrm{supp}(x^\star)$:
$$\delta^\star := f(x^{\star\star}) - f(x^\star),$$
where $x^{\star\star} = \arg\min\{f(x) : \|x\|_0 \le k,\ \mathrm{supp}(x) \neq \mathrm{supp}(x^\star),\ f(x) > f(x^\star)\}$. Intuitively, the larger $\delta^\star$ is, the easier and faster $x^\star$ can be recovered by HTP. It is also reasonable to expect that the step-size $\eta$ should be well bounded away from zero to avoid undesirable early stopping.
Inspired by these intuitive points, we present the following theorem which guarantees the exact
recovery of HTP when the restricted strong condition number is well bounded. A proof of this
theorem is provided in Appendix C (see the supplementary file).
Theorem 1. Assume that $f$ is $M_{2k}$-smooth and $m_{2k}$-strongly convex. Assume that $\gamma^\star := \frac{M_{2k}\, x^\star_{\min}}{\|\nabla f(x^\star)\|_\infty} > 1$ and $M_{2k} \le \frac{8\gamma^\star}{7\gamma^\star + 1}\, m_{2k}$. If we set the step-size to be $\eta = \frac{m_{2k}}{M_{2k}^2}$, then the optimal $k$-sparse solution $x^\star$ is unique and HTP will terminate with output $x^{(t)} = x^\star$ after at most
$$t = \left\lceil \frac{M_{2k}^3}{m_{2k}^2\,(M_{2k} - m_{2k})} \ln \frac{\Delta^{(0)}}{\delta^\star} \right\rceil$$
steps of iteration, where $\Delta^{(0)} = f(x^{(0)}) - f(x^\star)$ and $\delta^\star = \min\{f(x) - f(x^\star) : \|x\|_0 \le k,\ \mathrm{supp}(x) \neq \mathrm{supp}(x^\star),\ f(x) > f(x^\star)\}$.
Remark 2. Theorem 1 suggests that HTP is able to exactly recover $x^\star$ provided that $x^\star_{\min}$ is strictly larger than $\|\nabla f(x^\star)\|_\infty / M_{2k}$ and the restricted strong condition number is well bounded, i.e., $M_{2k}/m_{2k} \le \frac{8\gamma^\star}{7\gamma^\star + 1} < 8/7 \approx 1.14$.
As a consequence of Proposition 1 and Theorem 1, the following theorem establishes the performance of HTP for recovering the support of an arbitrary k-sparse vector. A proof of this result is
provided in Appendix D (see the supplementary file).
Theorem 2. Let $\bar{x}$ be an arbitrary $k$-sparse vector and $\bar{\lambda}$ be defined in Proposition 1. Assume that the conditions in Theorem 1 hold. Then HTP will output $x^{(t)}$ satisfying $\mathrm{supp}(x^{(t)}) = \mathrm{supp}(\bar{x})$ in finite iteration, provided that either of the following two conditions is satisfied in addition:
(1) $\bar{x}_{\min} \ge \frac{2\sqrt{2k}}{\bar{\lambda}}\|\nabla f(\bar{x})\|_\infty$; (2) $\bar{x}_{\min} \ge \left(\frac{\sqrt{\gamma^\star}}{m_{2k}} + \frac{2\gamma^\star + 2}{M_{2k}}\right)\|\nabla f(\bar{x})\|_\infty$.
In the following theorem, we further show that for proper $\bar{k} < k$, the HTP method is able to recover the support of certain desired $\bar{k}$-sparse vector without assuming bounded restricted strong condition numbers. A proof of this theorem can be found in Appendix E (see the supplementary file).
Theorem 3. Assume that $f$ is $M_{2k}$-smooth and $m_{2k}$-strongly convex. Let $\bar{x}$ be an arbitrary $\bar{k}$-sparse vector satisfying $k \ge \left(1 + \frac{16 M_{2k}^2}{m_{2k}^2}\right)\bar{k}$. Set the step-size to be $\eta = \frac{1}{2 M_{2k}}$.
(a) If $\bar{x}_{\min} > \sqrt{\frac{2(f(\bar{x}) - f(x^\star))}{m_{2k}}}$, then HTP will terminate in finite iteration with output $x^{(t)}$ satisfying $\mathrm{supp}(\bar{x}) \subseteq \mathrm{supp}(x^{(t)})$.
(b) Furthermore, if $\bar{x}_{\min} > 1.62\sqrt{\frac{2(f(\bar{x}) - f(x^\star))}{m_{2k}}}$, then HTP will terminate in finite iteration with output $x^{(t)}$ satisfying $\mathrm{supp}(x^{(t)}, \bar{k}) = \mathrm{supp}(\bar{x})$.
Remark 3. The main message conveyed by part (a) of Theorem 3 is: If the nonzero elements in $\bar{x}$ are significantly larger than the rooted objective value gap $\sqrt{f(\bar{x}) - f(x^\star)}$, then $\mathrm{supp}(\bar{x}) \subseteq \mathrm{supp}(x^{(t)})$ can be guaranteed by HTP with sufficiently large sparsity level $k$. Intuitively, the closer $f(\bar{x})$ is to $f(x^\star)$, the easier the conditions can be satisfied. Given that $f(\bar{x})$ is close enough to the unconstrained global minimum of $f$ (i.e., the global minimizer of $f$ is nearly sparse), we will have $f(\bar{x})$ close enough to $f(x^\star)$ since $f(\bar{x}) - f(x^\star) \le f(\bar{x}) - \min_x f(x)$. In the ideal case where the sparse vector $\bar{x}$ is an unconstrained minimum of $f$, we will have $f(\bar{x}) = f(x^\star)$, and thus $\mathrm{supp}(\bar{x}) \subseteq \mathrm{supp}(x^{(t)})$ holds under arbitrarily large restricted strong condition number.
Part (b) of Theorem 3 shows that under almost identical conditions (up to a slightly increased numerical constant) to those in part (a), HTP will output $x^{(t)}$ of which the top $\bar{k}$ entries are exactly the supporting set of $\bar{x}$. The implication of this result is: in order to recover certain $\bar{k}$-sparse signals, one may run HTP with a properly relaxed sparsity level $k$ until convergence and then preserve the top $\bar{k}$ entries of the $k$-sparse output as the final $\bar{k}$-sparse solution.
2.3 Comparison against prior results
It is interesting to compare our support recovery results with those implied by the parameter estimation error bounds obtained in prior work (Yuan et al., 2014; Jain et al., 2014). Actually, a parameter estimation error bound naturally leads to the so-called x-min condition which is key to the support recovery analysis. For example, it can be derived from the bounds in (Yuan et al., 2014) that under proper RIP condition $\|x^{(t)} - \bar{x}\| = O(\sqrt{k}\|\nabla f(\bar{x})\|_\infty)$ when $t$ is sufficiently large. This implies that as long as $\bar{x}_{\min}$ is significantly larger than such an estimation error bound, exact recovery of $\bar{x}$ can be guaranteed. In the meantime, the results in (Jain et al., 2014) show that for some $\bar{k}$-sparse minimizer of (1) with $\bar{k} = O\left(\frac{m_{2k}^2}{M_{2k}^2} k\right)$, it holds for arbitrary restrictive strong condition number that $\|x^{(t)} - \bar{x}\| = O(\sqrt{k}\|\nabla f(\bar{x})\|_\infty)$ when $t$ is sufficiently large. Provided that $\bar{x}_{\min}$ is significantly larger than such an error bound, it will hold true that $\mathrm{supp}(\bar{x}) \subseteq \mathrm{supp}(x^{(t)})$. Table 2 summarizes our support recovery results and those implied by the state-of-the-art results regarding target solution, dependency on RIP-type conditions and x-min condition. From this table, we can see that the x-min condition in Theorem 1 for recovering the global minimizer $x^\star$ is weaker than those implied in (Yuan et al., 2014) in the sense that the former is not dependent on a factor $\sqrt{k}$. Also our x-min condition in Theorem 3 is weaker than those implied in (Jain et al., 2014) because: 1) our bound $O(\sqrt{f(\bar{x}) - f(x^\star)})$ is not explicitly dependent on a multiplier $\sqrt{k}$; and 2) it can be verified from the restricted strong-convexity of $f$ that $\sqrt{f(\bar{x}) - f(x^\star)} \le \sqrt{k}\|\nabla f(\bar{x})\|_\infty / \sqrt{2 m_{2k}}$.
Table 2: Comparison between our support recovery conditions and those implied by the existing estimation error bounds for HTP-style methods.
Results             | Target Solution                                           | RIP Cond. | X-min Condition
(Yuan et al., 2014) | Arbitrary $k$-sparse $\bar{x}$                            | Required  | $\bar{x}_{\min} > O(\sqrt{k}\|\nabla f(\bar{x})\|_\infty)$
(Jain et al., 2014) | $\|\bar{x}\|_0 = O\left((\frac{m_{2k}}{M_{2k}})^2 k\right)$ | Free    | $\bar{x}_{\min} > O(\sqrt{k}\|\nabla f(\bar{x})\|_\infty)$
Theorem 1           | $x^\star = \arg\min_{\|x\|_0 \le k} f(x)$                 | Required  | $x^\star_{\min} > O(\|\nabla f(x^\star)\|_\infty)$
Theorem 2           | Arbitrary $k$-sparse $\bar{x}$                            | Required  | $\bar{x}_{\min} > O(\sqrt{k}\|\nabla f(\bar{x})\|_\infty)$ or $\bar{x}_{\min} > O(\|\nabla f(\bar{x})\|_\infty)$
Theorem 3           | $\|\bar{x}\|_0 = O\left((\frac{m_{2k}}{M_{2k}})^2 k\right)$ | Free    | $\bar{x}_{\min} > O(\sqrt{f(\bar{x}) - f(x^\star)})$
It is also interesting to compare the support recovery result in Proposition 1 with those known for the following $\ell_1$-regularized estimator:
$$\min_{x \in \mathbb{R}^p} f(x) + \lambda\|x\|_1,$$
where $\lambda$ is the regularization strength parameter. Recently, a unified sparsistency analysis for this type of convex-relaxed estimator was provided in (Li et al., 2015). We summarize below a comparison between our Proposition 1 and the state-of-the-art results in (Li et al., 2015) with respect to several key conditions:
- Local structured smoothness/convexity condition: Our analysis only requires first-order local structured smoothness/convexity conditions (i.e., RSC/RSS) while the analysis in (Li et al., 2015, Theorem 5.1, Condition 1) relies on certain second-order and third-order local structured smoothness conditions.
- Irrepresentability condition: Our analysis is free of the Irrepresentability Condition which is usually required to guarantee the sparsistency of $\ell_1$-regularized estimators (Li et al., 2015, Theorem 5.1, Condition 3).
- RIP-type condition: The analysis in (Li et al., 2015) is free of RIP-type condition, while ours is partially relying on such a condition (see Condition (2) of Proposition 1).
- X-min condition: Comparing to the x-min condition required in (Li et al., 2015, Theorem 5.1, Condition 4), which is of order $O(\sqrt{k}\|\nabla f(\bar{x})\|_\infty)$, the x-min condition (1) in Proposition 1 is at the same order while the x-min condition (2) is weaker as it is not explicitly dependent on $\sqrt{k}$.
3 Applications to Statistical Learning Models
In this section, we apply our support recovery analysis to several sparse statistical learning models, deriving concrete sparsistency conditions in each case. Given a set of $n$ independently drawn data samples $\{(u^{(i)}, v^{(i)})\}_{i=1}^n$, we are interested in the following sparsity-constrained empirical loss minimization problem:
$$\min_w f(w) := \frac{1}{n}\sum_{i=1}^n \ell(w^\top u^{(i)}, v^{(i)}), \quad \text{subject to } \|w\|_0 \le k,$$
where $\ell(\cdot, \cdot)$ is a loss function measuring the discrepancy between prediction and response and $w$ is a set of parameters to be estimated. In the subsequent subsections, we will investigate sparse linear regression and sparse logistic regression as two popular examples of the above formulation.
3.1 Sparsity-constrained linear regression
Given a $\bar{k}$-sparse parameter vector $\bar{w}$, let us consider the samples are generated according to the linear model $v^{(i)} = \bar{w}^\top u^{(i)} + \varepsilon^{(i)}$ where $\varepsilon^{(i)}$ are $n$ i.i.d. sub-Gaussian random variables with parameter $\sigma$. The sparsity-constrained least squared linear regression model is then given by:
$$\min_w f(w) = \frac{1}{2n}\sum_{i=1}^n \|v^{(i)} - w^\top u^{(i)}\|^2, \quad \text{subject to } \|w\|_0 \le k. \tag{3}$$
Suppose $u^{(i)}$ are drawn from Gaussian distribution with covariance $\Sigma$. Then it holds with high probability that $f(w)$ has RSC constant $m_{2k} \ge \lambda_{\min}(\Sigma) - O(\sqrt{k \log p / n})$ and RSS constant $M_{2k} \le \lambda_{\max}(\Sigma) + O(\sqrt{k \log p / n})$, and $\|\nabla f(\bar{w})\|_\infty = O\left(\sigma\sqrt{\log p / n}\right)$. From Theorem 2 we know that for sufficiently large $n$, if the condition number $\lambda_{\max}(\Sigma)/\lambda_{\min}(\Sigma)$ is well bounded and $\bar{w}_{\min} > O\left(\sigma\sqrt{\bar{k} \log p / n}\right)$, then $\mathrm{supp}(\bar{w})$ can be recovered by HTP after sufficient iteration. Since $\varepsilon^{(i)}$ are sub-Gaussian, we have $f(\bar{w}) = \frac{1}{2n}\sum_{i=1}^n \|\varepsilon^{(i)}\|^2 \le \sigma^2$ holds with high probability. From Theorem 3 we can see that if $\bar{w}_{\min} > 1.62\sigma\sqrt{2/m_{2k}}$, then $\mathrm{supp}(\bar{w})$ can be recovered, with high probability, by HTP with a sufficiently large sparsity level and a $\bar{k}$-sparse truncation postprocessing.
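The sketch below instantiates this setting with $\Sigma = I$ (so the condition number is 1) and checks relaxed support recovery. It reuses the `htp` function from the sketch after Algorithm 1; the step-size value is an illustrative guess rather than the tuned choice of Theorem 1, and all dimensions follow the experimental protocol of Section 4.

```python
import numpy as np

# Assumes htp() from the earlier sketch is in scope.
rng = np.random.default_rng(0)
n, p, k_bar = 400, 500, 50
support = rng.choice(p, size=k_bar, replace=False)
w_bar = np.zeros(p)
w_bar[support] = rng.standard_normal(k_bar)
U = rng.standard_normal((n, p))            # Sigma = I
v = U @ w_bar + rng.standard_normal(n)     # sub-Gaussian noise with sigma = 1

w_hat = htp(U, v, k=70, eta=0.2)           # relaxed sparsity level k > k_bar
relaxed_ok = set(support) <= set(np.flatnonzero(w_hat))
print("relaxed support recovery:", relaxed_ok)
```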
3.2 Sparsity-constrained logistic regression
Logistic regression is one of the most popular models in statistical learning. In this model the relation between the random feature vector $u \in \mathbb{R}^p$ and its associated random binary label $v \in \{-1, +1\}$ is determined by the conditional probability $\mathbb{P}(v | u; \bar{w}) = \exp(2v\bar{w}^\top u)/(1 + \exp(2v\bar{w}^\top u))$. Given a set of $n$ independently drawn data samples $\{(u^{(i)}, v^{(i)})\}_{i=1}^n$, the sparse logistic regression model learns the parameters $w$ so as to minimize the logistic log-likelihood over sparsity constraint:
$$\min_w f(w) = \frac{1}{n}\sum_{i=1}^n \log(1 + \exp(-2v^{(i)} w^\top u^{(i)})), \quad \text{subject to } \|w\|_0 \le k. \tag{4}$$
It has been shown in (Bahmani et al., 2013, Corollary 1) that under mild conditions, $f(w)$ has RSC and RSS with overwhelming probability. Suppose $u^{(i)}$ are sub-Gaussian with parameter $\sigma$, then it is known from (Yuan et al., 2014) that $\|\nabla f(\bar{w})\|_\infty = O\left(\sigma\sqrt{\log p/n}\right)$. Then from Theorem 2 we know that if the restrictive strong condition number is well bounded and $\bar{w}_{\min} > O\left(\sigma\sqrt{\bar{k}\log p/n}\right)$, then $\mathrm{supp}(\bar{w})$ can be recovered by HTP after sufficient iteration. By using Theorem 3 and the fact $f(\bar{w}) - f(w^\star) = O(\sqrt{k}\|\nabla f(\bar{w})\|_\infty)$, we can show that if $\bar{w}_{\min} > O\left(\sigma\sqrt{\bar{k}\log p/n}\right)$, then with high probability, $\mathrm{supp}(\bar{w})$ can be recovered by HTP using a sufficiently large sparsity level $k$ and proper postprocessing, without assuming bounded sparse condition number.
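To run HTP on this model, steps (S1) and (S3) of Algorithm 1 need the loss and gradient of (4); a NumPy sketch is below. The debiasing step (S3) then requires an iterative solver restricted to the selected support, since no closed form exists here. The helper name is ours, for illustration only.

```python
import numpy as np

def logistic_loss_grad(w, U, v):
    """Loss and gradient of f(w) = (1/n) sum_i log(1 + exp(-2 v_i w^T u_i)),
    the objective in (4), with labels v_i in {-1, +1}."""
    n = U.shape[0]
    margins = -2.0 * v * (U @ w)                  # m_i = -2 v_i w^T u_i
    loss = np.mean(np.logaddexp(0.0, margins))    # log(1 + exp(m_i)), computed stably
    sig = 0.5 * (1.0 + np.tanh(margins / 2.0))    # sigmoid(m_i), numerically stable
    grad = U.T @ (-2.0 * v * sig) / n             # sum_i sigmoid(m_i) * (-2 v_i u_i) / n
    return loss, grad
```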
4 Numerical Results
In this section, we conduct a group of Monte-Carlo simulation experiments on sparse linear regression and sparse logistic regression models to verify our theoretical predictions.
Data generation: We consider a synthetic data model in which the sparse parameter $\bar{w}$ is a $p = 500$ dimensional vector that has $\bar{k} = 50$ nonzero entries drawn independently from the standard Gaussian distribution. Each data sample $u$ is a normally distributed dense vector. For the linear regression model, the responses are generated by $v = \bar{w}^\top u + \varepsilon$ where $\varepsilon$ is a standard Gaussian noise. For the logistic regression model, the data labels, $v \in \{-1, 1\}$, are then generated randomly according to the Bernoulli distribution $\mathbb{P}(v = 1 | u; \bar{w}) = \exp(2\bar{w}^\top u)/(1 + \exp(2\bar{w}^\top u))$. We allow the sample size $n$ to be varying and for each $n$, we generate 100 random copies of data independently.
Evaluation metric: In our experiment, we test HTP with varying sparsity level $k \ge \bar{k}$. We use two metrics to measure the support recovery performance. We say a relaxed support recovery is successful if $\mathrm{supp}(\bar{w}) \subseteq \mathrm{supp}(w^{(t)})$ and an exact support recovery is successful if $\mathrm{supp}(\bar{w}) = \mathrm{supp}(w^{(t)}, \bar{k})$. We replicate the experiment over the 100 trials and record the percentage of successful relaxed support recovery and the percentage of successful exact support recovery for each configuration of $(n, k)$.
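In code, the two criteria can be checked as follows (a small helper of our own, matching the definitions above):

```python
import numpy as np

def recovery_success(w_bar, w_hat, k_bar):
    """Relaxed and exact support recovery criteria used in the experiments."""
    true_supp = set(np.flatnonzero(w_bar))
    est_supp = set(np.flatnonzero(w_hat))
    top_k_bar = set(np.argsort(-np.abs(w_hat))[:k_bar])
    relaxed = true_supp <= est_supp   # supp(w_bar) subset of supp(w^(t))
    exact = true_supp == top_k_bar    # supp(w_bar) = supp(w^(t), k_bar)
    return relaxed, exact
```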
Results: Figure 1 shows the percentage of relaxed/exact success curves as functions of sample size $n$ under varying sparsity level $k$.

[Figure 1 plots: percentage of relaxed success (left sub-panels) and exact success (right sub-panels) versus sample size n, for sparsity levels k = 50, 70, 90, 110, 130, 150; panel (a) linear regression, panel (b) logistic regression. Only the legends and axis labels are recoverable from the extraction.]
Figure 1: Chance of relaxed success (left panel) and exact success (right panel) curves for linear regression and logistic regression models.

[Figure 2 plots: percentage of relaxed and exact success versus n for HTP and IHT at sparsity level k = 70; panel (a) linear regression, panel (b) logistic regression. Only the legends and axis labels are recoverable from the extraction.]
Figure 2: HTP versus IHT: Chance of relaxed and exact success of support recovery.

From the curves in Figure 1(a) for the linear regression model we
can make two observations: 1) for each curve, the chance of success increases as $n$ increases, which matches the results in Theorem 1 and Theorem 2; 2) HTP has the best performance when using sparsity level $k = 70 > \bar{k}$. Also it can be seen that the percentage of relaxed success is less sensitive to $k$ than the percentage of exact success. These observations match the prediction in Theorem 3. Similar observations can be made from the curves in Figure 1(b) for the logistic regression model.
We have also compared HTP with IHT (Blumensath & Davies, 2009) in support recovery performance. Note that IHT is a simplified variant of HTP without the debiasing operation (S3) in Algorithm 1. Our exact support recovery analysis for HTP builds heavily upon such a debiasing operation.
Figure 2 shows the chance of success curves for these two methods with sparsity level k = 70. Figure 2(a) shows that in linear regression model, HTP is superior to IHT when the sample size n is
relatively small and they become comparable as n increases. Figure 2(b) indicates that HTP slightly
outperforms IHT when applied to the considered logistic regression task. From this group of results we can draw the conclusion that the debiasing step of HTP does have significant impact on
improving the support recovery performance especially in small sample size settings.
5 Conclusions
In this paper, we provided a deterministic support recovery analysis for HTP-style methods widely used in sparse learning. Theorem 1 establishes sufficient conditions for exactly recovering the global $k$-sparse minimizer $x^\star$ of the NP-hard problem (1). Theorem 2 provides sufficient conditions to
guarantee the support recovery of an arbitrary k-sparse target solution. Theorem 3 further shows
that even when the restricted strong condition number can be arbitrarily large, HTP is still able
to recover a target sparse solution by using certain relaxed sparsity level in the algorithm. We
have applied our deterministic analysis to sparse linear regression and sparse logistic regression
to establish the sparsistency of HTP in these statistical learning models. Based on our theoretical
justification and numerical observation, we conclude that HTP-style methods are not only accurate in
parameter estimation, but also powerful for exactly recovering sparse signals even in noisy settings.
Acknowledgments
Xiao-Tong Yuan and Ping Li were partially supported by NSF-Bigdata-1419210, NSF-III-1360971,
ONR-N00014-13-1-0764, and AFOSR-FA9550-13-1-0137. Xiao-Tong Yuan is also partially supported by NSFC-61402232, NSFC-61522308, and NSFJP-BK20141003. Tong Zhang is supported
by NSF-IIS-1407939 and NSF-IIS-1250985.
References
Agarwal, A., Negahban, S., and Wainwright, M. Fast global convergence rates of gradient methods for high-dimensional statistical recovery. In Proceedings of the 24th Annual Conference on Neural Information Processing Systems (NIPS'10), 2010.
Bahmani, S., Raj, B., and Boufounos, P. Greedy sparsity-constrained optimization. Journal of Machine Learning Research, 14:807-841, 2013.
Beck, A. and Teboulle, M. A fast iterative shrinkage-thresholding algorithm for linear inverse problems. SIAM Journal on Imaging Sciences, 2(1):183-202, 2009.
Blumensath, T. Compressed sensing with nonlinear observations and related nonlinear optimization problems. IEEE Transactions on Information Theory, 59(6):3466-3474, 2013.
Blumensath, T. and Davies, M. E. Iterative hard thresholding for compressed sensing. Applied and Computational Harmonic Analysis, 27(3):265-274, 2009.
Donoho, D. L. Compressed sensing. IEEE Transactions on Information Theory, 52(4):1289-1306, 2006.
Foucart, S. Hard thresholding pursuit: An algorithm for compressive sensing. SIAM Journal on Numerical Analysis, 49(6):2543-2563, 2011.
Jain, P., Rao, N., and Dhillon, I. Structured sparse regression via greedy hard-thresholding. 2016. URL http://arxiv.org/pdf/1602.06042.pdf.
Jain, P., Tewari, A., and Kar, P. On iterative hard thresholding methods for high-dimensional m-estimation. In Proceedings of the 28th Annual Conference on Neural Information Processing Systems (NIPS'14), 685-693, 2014.
Jalali, A., Johnson, C. C., and Ravikumar, P. K. On learning discrete graphical models using greedy methods. In Proceedings of the 25th Annual Conference on Neural Information Processing Systems (NIPS'11), 2011.
Li, X., Zhao, T., Arora, R., Liu, H., and Haupt, J. Stochastic variance reduced optimization for nonconvex sparse learning. In Proceedings of the 33rd International Conference on Machine Learning (ICML'16), 2016.
Li, Y.-H., Scarlett, J., Ravikumar, P., and Cevher, V. Sparsistency of $\ell_1$-regularized M-estimators. In Proceedings of the 18th International Conference on Artificial Intelligence and Statistics (AISTATS'15), 2015.
Natarajan, B. K. Sparse approximate solutions to linear systems. SIAM Journal on Computing, 24(2):227-234, 1995.
Needell, D. and Tropp, J. A. CoSaMP: iterative signal recovery from incomplete and inaccurate samples. IEEE Transactions on Information Theory, 26(3):301-321, 2009.
Nesterov, Y. Introductory Lectures on Convex Optimization: A Basic Course. Kluwer, 2004. ISBN 9781402075537.
Nutini, J., Schmidt, M. W., Laradji, I. H., Friedlander, M. P., and Koepke, H. A. Coordinate descent converges faster with the Gauss-Southwell rule than random selection. In Proceedings of the 32nd International Conference on Machine Learning (ICML'15), pp. 1632-1641, 2015.
Pati, Y. C., Rezaiifar, R., and Krishnaprasad, P. S. Orthogonal matching pursuit: Recursive function approximation with applications to wavelet decomposition. In Proceedings of the 27th Annual Asilomar Conference on Signals, Systems, and Computers, pp. 40-44, 1993.
Ravikumar, P., Wainwright, M. J., Raskutti, G., and Yu, B. High-dimensional covariance estimation by minimizing $\ell_1$-penalized log-determinant divergence. Electronic Journal of Statistics, 5:935-980, 2011.
Shalev-Shwartz, S., Srebro, N., and Zhang, T. Trading accuracy for sparsity in optimization problems with sparsity constraints. SIAM Journal on Optimization, 20:2807-2832, 2010.
Shen, J. and Li, P. A tight bound of hard thresholding. 2016. URL http://arxiv.org/pdf/1605.01656.pdf.
Yuan, X.-T. and Yan, S. Forward basis selection for pursuing sparse representations over a dictionary. IEEE Transactions on Pattern Analysis And Machine Intelligence, 35(12):3025-3036, 2013.
Yuan, X.-T., Li, P., and Zhang, T. Gradient hard thresholding pursuit for sparsity-constrained optimization. In Proceedings of the 31st International Conference on Machine Learning (ICML'14), 2014.
6,006 | 6,433 | Spatiotemporal Residual Networks
for Video Action Recognition
Christoph Feichtenhofer
Graz University of Technology
Axel Pinz
Graz University of Technology
Richard P. Wildes
York University, Toronto
feichtenhofer@tugraz.at
axel.pinz@tugraz.at
wildes@cse.yorku.ca
Abstract
Two-stream Convolutional Networks (ConvNets) have shown strong performance
for human action recognition in videos. Recently, Residual Networks (ResNets)
have arisen as a new technique to train extremely deep architectures. In this paper,
we introduce spatiotemporal ResNets as a combination of these two approaches.
Our novel architecture generalizes ResNets for the spatiotemporal domain by
introducing residual connections in two ways. First, we inject residual connections
between the appearance and motion pathways of a two-stream architecture to
allow spatiotemporal interaction between the two streams. Second, we transform
pretrained image ConvNets into spatiotemporal networks by equipping them with
learnable convolutional filters that are initialized as temporal residual connections
and operate on adjacent feature maps in time. This approach slowly increases the
spatiotemporal receptive field as the depth of the model increases and naturally
integrates image ConvNet design principles. The whole model is trained end-to-end
to allow hierarchical learning of complex spatiotemporal features. We evaluate our
novel spatiotemporal ResNet using two widely used action recognition benchmarks
where it exceeds the previous state-of-the-art.
1 Introduction
Action recognition in video is an intensively researched area, with many recent approaches focused
on application of Convolutional Networks (ConvNets) to this task, e.g. [13, 20, 26]. As actions can
be understood as spatiotemporal objects, researchers have investigated carrying spatial recognition
principles over to the temporal domain by learning local spatiotemporal filters [13, 25, 26]. However,
since the temporal domain arguably is fundamentally different from the spatial one, different treatment
of these dimensions has been considered, e.g. by incorporating optical flow networks [20], or
modelling temporal sequences in recurrent architectures [4, 18, 19].
Since the introduction of the "AlexNet" architecture [14] in the 2012 ImageNet competition, ConvNets
have dominated state-of-the-art performance across a variety of computer vision tasks, including
object-detection, image segmentation, image classification, face recognition, human pose estimation
and tracking. In conjunction with these advances as well as the evolution of network architectures,
several design best practices have emerged [8, 21, 23, 24]. First, information bottlenecks should be
avoided and the representation size should gently decrease from the input to the output as the number
of feature channels increases with the depth of the network. Second, the receptive field at the end of
the network should be large enough that the processing units can base operations on larger regions of
the input. This functionality can be achieved by stacking many small filters or using large filters in the
network; notably, the first choice can be implemented with fewer operations (faster, fewer parameters)
and also allows inclusion of more nonlinearities. Third, dimensionality reduction (1×1 convolutions) before spatially aggregating filters (e.g. 3×3) is supported by the fact that outputs of neighbouring
filters are highly correlated and therefore these activations can be reduced before aggregation [23].
Fourth, spatial factorization into asymmetric filters can even further reduce computational cost and ease the learning problem.

[Figure 1 diagram: the appearance and motion streams (conv1 through conv5_x with residual (+) connections and per-stream losses), with motion-to-appearance residual links; only the layer labels are recoverable from the extraction.]
Figure 1: Our method introduces residual connections in a two-stream ConvNet model [20]. The two networks separately capture spatial (appearance) and temporal (motion) information to recognize the input sequences. We do not use residuals from the spatial into the temporal stream as this would bias both losses towards appearance information.
Fifth, it is important to normalize the responses of each feature channel
within a batch to reduce internal covariate shift [11]. The last architectural guideline is to use residual
connections to facilitate training of very deep models that are essential for good performance [8]. We
carry over these good practices for designing ConvNets in the image domain to the video domain
by converting the 1×1 convolutional dimensionality mapping filters in ResNets to temporal filters.
By stacking several of these transformed temporal filters throughout the network we provide a large
receptive field for the discriminative units at the end of the network. Further, this design allows us
to convert spatial ConvNets into spatiotemporal models and thereby exploits the large amount of
training data from image datasets such as ImageNet.
We build on the two-stream approach [20] that employs two separate ConvNet streams, a spatial
appearance stream, which achieves state-of-the-art action recognition from RGB images and a
temporal motion stream, which operates on optical flow information. The two-stream architecture
is inspired by the two-stream hypothesis from neuroscience [6] that postulates two pathways in
the visual cortex: The ventral pathway, which responds to spatial features such as shape or colour
of objects, and the dorsal pathway, which is sensitive to object transformations and their spatial
relationship, as e.g. caused by motion. We extend two-stream ConvNets in the following ways.
First, motivated by the recent success of residual networks (ResNets) [8] for numerous challenging
recognition tasks on datasets such as ImageNet and MS COCO, we apply ResNets to the task of
human action recognition in videos. Here, we initialize our model with pre-trained ResNets for image
categorization [8] to leverage a large amount of image-based training data for the action recognition
task in video. Second, we demonstrate that injecting residual connections between the two streams
(see Fig. 1) and jointly fine-tuning the resulting model achieves improved performance over the
two-stream architecture. Third, we overcome limited temporal receptive field size in the original
two-stream approach by extending the model over time. We convert convolutional dimensionality
mapping filters to temporal filters that provide the network with learnable residual connections over
time. By stacking several of these temporal filters and sampling the input sequence at large temporal
strides (i.e. skipping frames), we enable the network to operate over large temporal extents of the
input. To demonstrate the benefits of our proposed spatiotemporal ResNet architecture, it has been
evaluated on two standard action recognition benchmarks where it greatly boosts the state-of-the-art.
2 Related work
Approaches for action recognition in video can largely be divided into two categories: Those that use
hand-crafted features with decoupled classifiers and those that jointly learn features and classifier.
Our work is related to the latter, which is outlined in the following.
Several approaches have been presented for spatiotemporal feature learning. Unsupervised learning
techniques have been applied by stacking ISA or convolutional gated RBMs to learn spatiotemporal
features for action recognition [16, 25]. In other work, spatiotemporal features are learned by
extending 2D ConvNets into time by stacking consecutive video frames [12]. Yet another study
compared several approaches to extending ConvNets into the temporal domain, but with rather
disappointing results [13]: The architectures were not particularly sensitive to temporal modelling,
2
with a slow fusion model performing slightly better than early and late fusion alternatives; moreover,
similar levels of performance were achieved by a purely spatial network. The recently proposed C3D
approach learns 3D ConvNets on a limited temporal support of 16 frames and all filter kernels having
size 3×3×3 [26]. The network structure is similar to earlier deep spatial networks [21].
Another research branch has investigated combining image information in network architectures
across longer time periods. A comparison of temporal pooling architectures suggested that temporal
pooling of convolutional layers performs better than slow, local, or late pooling, as well as temporal
convolution [18]. That work also considered ordered sequence modelling, which feeds ConvNet
features into a recurrent network with Long Short-Term Memory (LSTM) cells. Using LSTMs,
however, did not yield an improvement over temporal pooling of convolutional features. Other work
trained an LSTM on human skeleton sequences to regularize another LSTM that uses an Inception
network for frame-level descriptor input [17]. Yet other work uses a multilayer LSTM to let the
model attend to relevant spatial parts in the input frames [19]. Further, the inner product of a recurrent
model has been replaced with a 2D convolution and thereby converts the fully connected hidden
layers in a GRU-RNN to 2D convolutional operations [1]. That approach takes advantage of the local
spatial similarity in images; however, it only yields a minor increase over their baseline, which is a
two-stream VGG-16 ConvNet [21] used as the input to their convolutional RNN. Finally, three recent
approaches for action recognition apply ConvNets as follows: In [2] dynamic images are created
by weighted averaging of video frames over time; [31] captures the transformation of ConvNet
features from the beginning to the end of the video with a Siamese architecture; and [5] introduces a
spatiotemporal convolutional fusion layer between the streams of a two-stream architecture.
Notably, the most closely related work to ours (and to several of those above) is the two-stream
ConvNet architecture [20]. That approach first decomposes video into spatial and temporal components by using RGB and optical flow frames. These components are fed into separate deep ConvNet
architectures to learn spatial as well as temporal information about the appearance and movement
of the objects in a scene. Each stream initially performs video recognition on its own and for final
classification, softmax scores are combined by late fusion. To date, this approach is the most effective
approach of applying deep learning to action recognition, especially with limited training data. In
our work we directly convert image ConvNets into 3D architectures and show greatly improved
performance over the two-stream baseline.
3 Technical approach
3.1 Two-Stream residual networks
As our base representation we use deep ResNets [8, 9]. These networks are designed similarly
to the VGG networks [21], with small 3×3 spatial filters (except at the first layer), and similar to the Inception networks [23], with 1×1 filters for learned dimensionality reduction and expansion. The network sees an input of size 224×224 that is reduced five times in the network by stride 2 convolutions followed by a global average pooling layer of the final 7×7 feature map and a fully-connected classification layer with softmax. Each time the spatial size of the feature map changes,
the number of features is doubled to avoid tight bottlenecks. Batch normalization [11] and ReLU
[14] are applied after each convolution; the network does not use hidden fc, dropout, or max-pooling
(except immediately after the first layer). The residual units are defined as [8, 9]:
$$x_{l+1} = f\left(x_l + \mathcal{F}(x_l; W_l)\right), \tag{1}$$
where $x_l$ and $x_{l+1}$ are input and output of the $l$-th layer, $\mathcal{F}$ is a nonlinear residual mapping represented by convolutional filter weights $W_l = \{W_{l,k} \,|\, 1 \le k \le K\}$ with $K \in \{2, 3\}$ and $f \equiv$ ReLU [9]. A key
advantage of residual units is that their skip connections allow direct signal propagation from the first
to the last layer of the network. Especially during backpropagation this arrangement is advantageous:
Gradients are propagated directly from the loss layer to any previous layer while skipping intermediate
weight layers that have potential to trigger vanishing or deterioration of the gradient signal.
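As a concrete reference, a minimal PyTorch sketch of such a bottleneck residual unit, with the conv5_x stage sizes of ResNet-50 and the standard post-activation ordering of [8], is given below. It is our own illustration, not the authors' code.

```python
import torch.nn as nn

class Bottleneck(nn.Module):
    """Residual unit x_{l+1} = f(x_l + F(x_l; W_l)) with a 1x1-3x3-1x1 bottleneck,
    as in ResNet-50 [8]; channel sizes match the conv5_x stage."""
    def __init__(self, channels=2048, width=512):
        super().__init__()
        self.residual = nn.Sequential(
            nn.Conv2d(channels, width, kernel_size=1, bias=False),
            nn.BatchNorm2d(width), nn.ReLU(inplace=True),
            nn.Conv2d(width, width, kernel_size=3, padding=1, bias=False),
            nn.BatchNorm2d(width), nn.ReLU(inplace=True),
            nn.Conv2d(width, channels, kernel_size=1, bias=False),
            nn.BatchNorm2d(channels),
        )
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        # skip connection: identity plus the learned residual mapping F
        return self.relu(x + self.residual(x))
```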
We also leverage the two-stream architecture [20]. For both streams, we use the ResNet-50 model [8]
pretrained on the ImageNet dataset and replace the last (classification) layer according to the number
of classes in the target dataset. The filters in the first layer of the motion stream are further modified
by replicating the three RGB filter channels to a size of 2L = 20 for operating over the horizontal
and vertical optical flow stacks, each of which has a stack of L = 10 frames. This tack allows us to
exploit the availability of a large amount of annotated training data for both streams.
[Figure 2 diagram: the conv5_1, conv5_2, and conv5_3 residual units of the appearance and motion streams, built from 1x1x1 x 512, 3x3x1 x 512, and 1x1x1 x 2048 convolution+ReLU blocks, with 1x1x3 temporal convolutions in conv5_2; only these block labels are recoverable from the extraction.]
Figure 2: The conv5_x residual units of our architecture. A residual connection (highlighted in red)
between the two streams enables motion interactions. The second residual unit, conv5_2 also includes
temporal convolutions (highlighted in green) for learning abstract spacetime features.
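The 1x1x3 blocks in Figure 2 are the temporal filters obtained by transforming 1×1 dimensionality-mapping filters, initialized as temporal residual connections (see the abstract). A hedged PyTorch sketch of one plausible such initialization, an identity mapping over the temporal dimension, is shown below; the authors' exact initialization is not specified in this excerpt.

```python
import torch
import torch.nn as nn

def temporal_residual_conv(channels, t_kernel=3):
    """1x1 spatial, t_kernel temporal Conv3d (C, T, H, W layout) initialized as an
    identity mapping over time, so training starts from a temporal residual connection."""
    conv = nn.Conv3d(channels, channels, kernel_size=(t_kernel, 1, 1),
                     padding=(t_kernel // 2, 0, 0), bias=False)
    with torch.no_grad():
        conv.weight.zero_()
        # center temporal tap acts as the identity over channels
        for c in range(channels):
            conv.weight[c, c, t_kernel // 2, 0, 0] = 1.0
    return conv
```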
A drawback of the two-stream architecture is that it is unable to spatiotemporally register appearance
and motion information. Thus, it is not able to represent what (captured by the spatial stream) moves
in which way (captured by the temporal stream). Here, we remedy this deficiency by letting the
network learn such spatiotemporal cues at several spatiotemporal scales. We enable this interaction
by introducing residual connections between the two streams. Just as there can be various types
of shortcut connections in a ResNet, there are several ways the two streams can be connected. In
preliminary experiments we found that direct connections between identical layers of the two streams
led to an increase in validation error. Similarly, bidirectional connections increased the validation
error significantly. We conjecture that these results are due to the large change that the signal of
one network stream undergoes after injecting a fusion signal from the other stream. Therefore, we
developed a more subtle alternative solution based on additive interactions, as follows.
Motion Residuals. We inject a skip connection from the motion stream to the appearance stream's
residual unit. To enable learning of spatiotemporal features at all possible scales, this modification
is applied before the second residual unit at each spatial resolution of the network (indicated by
"skip-stream" in Table 1), as exemplified by the connection at the conv5_x layers in Fig. 2. Formally,
the corresponding appearance stream?s residual units (1) are modified according to
a
x
?al+1 = f (xal ) + F xal + f (xm
(2)
l ), Wl ,
where xal is the input of the l-th layer appearance stream, xm
l the input of the l-th layer motion stream
and Wla are the weights of the l-th layer residual unit in the appearance stream. For the gradient on
the loss function $L$ in the backward pass, the chain rule yields
$$\frac{\partial L}{\partial x^a_l} = \frac{\partial L}{\partial x^a_{l+1}} \frac{\partial x^a_{l+1}}{\partial x^a_l} = \frac{\partial L}{\partial x^a_{l+1}} \left( \frac{\partial f(x^a_l)}{\partial x^a_l} + \frac{\partial}{\partial x^a_l} \mathcal{F}\left(x^a_l + f(x^m_l),\, W^a_l\right) \right) \qquad (3)$$
for the appearance stream and similarly for the motion stream
$$\frac{\partial L}{\partial x^m_l} = \frac{\partial L}{\partial x^m_{l+1}} \frac{\partial x^m_{l+1}}{\partial x^m_l} + \frac{\partial L}{\partial x^a_{l+1}} \frac{\partial}{\partial x^m_l} \mathcal{F}\left(x^a_l + f(x^m_l),\, W^a_l\right), \qquad (4)$$
where the first additive term of (4) is the gradient at the l-th layer in the motion stream and the second
term accumulates gradients from the appearance stream. Thus, the residual connection between the
streams backpropagates gradients from the appearance stream into the motion stream.
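The forward computation of (2) can be sketched in a few lines; this illustrative NumPy fragment (helper names and shapes are our assumptions) shows how the motion signal is injected into the appearance stream before the residual mapping:

import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def appearance_unit_with_motion_skip(x_a, x_m, residual_fn):
    # Eq. (2): x^a_{l+1} = f(x^a_l) + F(x^a_l + f(x^m_l), W^a_l).
    # residual_fn stands in for the weighted mapping F(.; W^a_l).
    return relu(x_a) + residual_fn(x_a + relu(x_m))

rng = np.random.default_rng(0)
x_a = rng.standard_normal((7, 7, 512))   # appearance-stream features
x_m = rng.standard_normal((7, 7, 512))   # motion-stream features
# Toy residual mapping: per-channel scaling (a real F would be the
# 1x1 -> 3x3 -> 1x1 bottleneck of Fig. 2).
scale = 0.1 * rng.standard_normal(512)
out = appearance_unit_with_motion_skip(x_a, x_m, lambda z: z * scale)
print(out.shape)  # (7, 7, 512)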
3.2 Convolutional residual connections across time
Spatiotemporal coherence is an important cue when working with time varying visual data and can
be exploited to learn general representations from video in an unsupervised manner [7]. In that case,
temporal smoothness is an important property and is enforced by requiring features to vary slowly
with respect to time. Further, one can expect that in many cases a ConvNet is capturing similar
features across time. For example, an action with repetitive motion patterns such as "Hammering"
would trigger similar features for the appearance and motion stream over time. For such cases
the use of temporal residual connections would make perfect sense. However, for cases where the
[Figure 3 diagram omitted: per-frame conv1 through conv5_x towers, connected by residual additions
over time at stride τ, followed by pool and fc.]
Figure 3: The temporal receptive field of a single neuron at the fifth meta layer of our motion network
stream is highlighted. τ indicates the temporal stride between inputs. The outputs of conv5_3 are
max-pooled in time and fed to the fully connected layer of our ST-ResNet*.
appearance or the instantaneous motion pattern varies over time, a residual connection would be
suboptimal for discriminative learning, since the sum operation corresponds to a low-pass filtering
over time and would smooth out potentially important high-frequency temporal variation of the
features. Moreover, backpropagation is unable to compensate for that deficit since at a sum layer all
gradients are distributed equally from output to input connections.
Based on the above observations, we developed a novel approach to temporal residual connections
that builds on the ConvNet design guidelines of chaining small [21] asymmetric [10, 23] filters, noted
in Sec. 1. We extend the ResNet architecture with temporal convolutions by transforming spatial
dimensionality mapping filters in the residual paths to temporal filters. This allows the straightforward
use of standard two-stream ConvNets that have been pre-trained on large-scale datasets e.g. to leverage
the massive amounts of training data from the ImageNet challenge. We initialize the temporal weights
as residual connections across time and let the network learn to best discriminate image dynamics
via backpropagation. We achieve this by replicating the learned spatial 1×1 dimensionality mapping
kernels in pretrained ResNets across time. Given the pretrained spatial weights $w_l \in \mathbb{R}^{1\times1\times C}$,
temporal filters $\tilde{w}_l \in \mathbb{R}^{1\times1\times T'\times C}$ are initialized according to
$$\tilde{w}_l(i, j, t, c) = \frac{w_l(i, j, c)}{T'}, \quad \forall t \in [1, T'], \qquad (5)$$
and subsequently refined via backpropagation. In (5), division by $T'$ serves to average feature
responses across time. We transform filters from both the motion and the appearance ResNets
accordingly. Hence, the temporal filters are able to learn the temporal evolution of the appearance
and motion features and, moreover, by stacking such filters as the depth of the network increases
complex spatiotemporal relationships can be modelled.
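As an illustration of the initialization in (5), the NumPy sketch below (shapes chosen only for the example) replicates a pretrained 1×1 spatial kernel across T' = 3 time steps and divides by T', so the new filter initially averages features over time:

import numpy as np

def init_temporal_filter(w_spatial, t_prime=3):
    # w_spatial: (1, 1, C_in, C_out) pretrained 1x1 mapping kernel.
    # Returns (1, 1, T', C_in, C_out): eq. (5), w~(i,j,t,c) = w(i,j,c) / T'.
    w = w_spatial[:, :, None, :, :] / float(t_prime)
    return np.repeat(w, t_prime, axis=2)

w_spatial = np.random.default_rng(0).standard_normal((1, 1, 256, 64))
w_temporal = init_temporal_filter(w_spatial)
print(w_temporal.shape)            # (1, 1, 3, 256, 64)
# Summing over time recovers the original spatial response, so the
# network starts out as a temporal residual/averaging connection.
print(np.allclose(w_temporal.sum(axis=2), w_spatial))  # True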
3.3 Proposed architecture
Our overall architecture (used for each stream) is summarized in Table 1. The underlying network
used is a 50 layer ResNet [8]. Each filtering operation is followed by batch normalization [11] and
halfway rectification (ReLU). Table 1 lists "metalayers" which share the same output size. For each
metalayer we give its convolutional and pooling building blocks, with the filter and pooling size shown
as (W × H × T, C), denoting width, height, temporal extent and number of feature channels, respectively.
Brackets outline residual units equipped with skip connections. We also list the output size of each
metalayer as well as the receptive field on which it operates. One observes that the temporal
receptive field is modulated by the temporal stride
τ between the input chunks. For example, if the stride is set to τ = 15 frames, a unit at conv5_3 sees
a window of 17 × 15 = 255 frames on the input video; see Fig. 3. The pool5 layer receives multiple
spatiotemporal features, where the spatial 7×7 features are averaged as in [8] and the temporal
features are max-pooled within a window of 5, with each of these seeing a window of 705 frames at
the input. The pool5 output is classified by a fully connected layer of size 1×1×1×2048; note
that this passes several temporally max-pooled chunks to the softmax log-loss layer afterwards. For
videos with less than 705 frames we reduce the stride between temporal inputs and for extremely
short videos we symmetrically pad the input over time.
Sub-batch normalization. Batch normalization [11] subtracts from all activations the batchwise
mean and divides by their variance. These moments are estimated by averaging over spatial locations
and multiple images in the batch. After batch normalization a learned, channel-specific affine
transformation (scaling and bias) is applied. The noisy bias/variance estimation replaces the need
conv1:   7×7×1, 64 (stride 2)                        | output 112×112×11 | receptive field 7×7×1
pool1:   3×3×1 max (stride 2)                        | output 56×56×11   | receptive field 11×11×1
conv2_x: [1×1×1, 64; 3×3×1, 64; 1×1×1, 256], skip-stream,
         [1×1×3, 64; 3×3×1, 64; 1×1×3, 256],
         [1×1×1, 64; 3×3×1, 64; 1×1×1, 256]          | output 56×56×11   | receptive field 35×35×5τ
conv3_x: [1×1×1, 128; 3×3×1, 128; 1×1×1, 512], skip-stream,
         [1×1×3, 128; 3×3×1, 128; 1×1×3, 512],
         [1×1×1, 128; 3×3×1, 128; 1×1×1, 512] ×2     | output 28×28×11   | receptive field 99×99×9τ
conv4_x: [1×1×1, 256; 3×3×1, 256; 1×1×1, 1024], skip-stream,
         [1×1×3, 256; 3×3×1, 256; 1×1×3, 1024],
         [1×1×1, 256; 3×3×1, 256; 1×1×1, 1024] ×4    | output 14×14×11   | receptive field 291×291×13τ
conv5_x: [1×1×1, 512; 3×3×1, 512; 1×1×1, 2048], skip-stream,
         [1×1×3, 512; 3×3×1, 512; 1×1×3, 2048],
         [1×1×1, 512; 3×3×1, 512; 1×1×1, 2048]       | output 7×7×11     | receptive field 483×483×17τ
pool5:   7×7×1 avg; 1×1×5 max (stride 2)             | output 1×1×4      | receptive field 675×675×47τ
Table 1: Spatiotemporal ResNet architecture used in both ConvNet streams. Each metalayer is listed
with its building blocks, giving the convolutional filter dimensions (W × H × T, C) in brackets. Each
bracketed building block also has a skip connection to the block below, and skip-stream denotes a
residual connection from the motion to the appearance stream, e.g., see Fig. 2 for the conv5_2 building
block. Stride 2 downsampling is performed by conv1, pool1, conv3_1, conv4_1 and conv5_1. The
output size and receptive field of each metalayer are listed alongside. For both streams, the pool5
layer is followed by a 1×1×1×2048 fully connected layer, a softmax and a loss.
for dropout regularization [8, 24]. We found that lowering the number of samples used for batch
normalization can further improve the generalization performance of the model. For example, for the
appearance stream we use a low batch size of 4 for moment estimation during training. This practice
strongly supports generalization of the model and nontrivially increases validation accuracy (≈4% on
UCF101). Interestingly, in comparison to this approach, using dropout after the classification layer
(e.g. as in [24]) decreased validation accuracy of the appearance stream. Note that only the batch size
for normalizing the activations is reduced; the batch size in stochastic gradient descent is unchanged.
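The effect of sub-batch normalization can be sketched as follows; this is our illustrative NumPy fragment (not the MatConvNet implementation), normalizing each sub-batch of 4 samples with its own statistics while the SGD batch stays whole:

import numpy as np

def sub_batch_norm(x, sub_batch_size=4, eps=1e-5):
    # x: (N, H, W, C) activations for one SGD batch. Moments are
    # estimated per sub-batch of sub_batch_size samples, making them
    # noisier and acting as a regularizer.
    out = np.empty_like(x)
    for start in range(0, x.shape[0], sub_batch_size):
        chunk = x[start:start + sub_batch_size]
        mean = chunk.mean(axis=(0, 1, 2), keepdims=True)
        var = chunk.var(axis=(0, 1, 2), keepdims=True)
        out[start:start + sub_batch_size] = (chunk - mean) / np.sqrt(var + eps)
    return out  # a learned per-channel scale and bias would follow

x = np.random.default_rng(0).standard_normal((128, 7, 7, 32))
print(sub_batch_norm(x).shape)  # (128, 7, 7, 32)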
3.4 Model training and evaluation
Our method has been implemented in MatConvNet [28] and we share our code and models at
https://github.com/feichtenhofer/st-resnet. We train our model in three optimization steps with the
parameters listed in Table 2.
Training phase      SGD batch size   Bnorm batch size   Learning rate (#iterations)             Temporal chunks / stride τ
Motion stream       256              86                 10^-2 (30K), 10^-3 (10K)                1 / τ = 1
Appearance stream   256              8                  10^-2 (10K), 10^-3 (10K)                1 / τ = 1
ST-ResNet           128              4                  10^-3 (30K), 10^-4 (30K), 10^-5 (20K)   5 / τ ∈ [5, 15]
ST-ResNet*          128              4                  10^-4 (2K), 10^-5 (2K)                  11 / τ ∈ [1, 15]
Table 2: Parameters for the three training phases of our model.
Motion and appearance streams. First, each stream is trained similarly to [20] using Stochastic
Gradient Descent (SGD) with momentum of 0.9. We rescale all videos by keeping the aspect ratio
and resizing the smallest side of a frame to 256. The motion network uses optical flow stacking
with L = 10 frames and is trained for 30K iterations with a learning rate of 10^-2 followed by 10K
iterations at a learning rate of 10^-3. At each iteration, a batch of 256 samples is constructed by
randomly sampling a single optical flow stack from a video; however, for batch normalization [11],
we only use 86 samples to facilitate generalization. We precompute optical flow [32] before training
and store the flow fields as JPEGs (with displacement vectors > 20 pixels clipped). During training,
we use the same augmentations as in [1, 31]; i.e. randomly cropping from the borders and centre of
the flow stack and sampling the width and height of each crop randomly within {256, 224, 192, 168},
followed by resizing to 224 × 224. The appearance stream is trained identically with a batch of
256 RGB frames and a learning rate of 10^-2 for 10K iterations, followed by 10^-3 for another 10K
iterations. Notably, here we choose a very small batch size of 8 for normalization. We also apply
random cropping and scale augmentations: We randomly jitter the width and height of the 224 × 224
input frame by ±25% and also randomly crop it from a maximum of 25% distance from the image
borders. The cropped patch is rescaled to 224 × 224 and passed as input to the network. The same
rescaling and cropping technique is chosen to train the next two steps described below. In all our
training steps we use random horizontal flipping and do not apply RGB colour jittering [14].
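A compact sketch of this crop-and-rescale augmentation is given below (our illustration; the exact sampling in the released code may differ), drawing a crop whose width and height are sampled independently from {256, 224, 192, 168}:

import numpy as np

CROP_SIZES = (256, 224, 192, 168)

def random_crop_and_jitter(frame, rng, out_size=224):
    # frame: (H, W, C) with smallest side already rescaled to 256.
    h, w = frame.shape[:2]
    ch = min(int(rng.choice(CROP_SIZES)), h)
    cw = min(int(rng.choice(CROP_SIZES)), w)
    top = rng.integers(0, h - ch + 1)
    left = rng.integers(0, w - cw + 1)
    crop = frame[top:top + ch, left:left + cw]
    if rng.random() < 0.5:                 # random horizontal flip
        crop = crop[:, ::-1]
    # Nearest-neighbour resize to out_size x out_size (a toy stand-in
    # for proper bilinear resizing).
    ys = np.arange(out_size) * ch // out_size
    xs = np.arange(out_size) * cw // out_size
    return crop[ys][:, xs]

rng = np.random.default_rng(0)
frame = rng.standard_normal((256, 340, 3))
print(random_crop_and_jitter(frame, rng).shape)  # (224, 224, 3)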
ST-ResNet. Second, to train our spatiotemporal ResNet we sample 5 inputs from a video with a
random temporal stride between 5 and 15 frames. This technique can be thought of as frame-rate
jittering for the temporal convolutional layers and is important to reduce overfitting of the final model.
SGD is used with a batch size of 128 videos where 5 temporal chunks are extracted from each.
Batch normalization uses a smaller batch size of 128/32 = 4. The learning rate is set to 10^-3 and is
reduced by a factor of 10 after 30K iterations. Notably, there is no pooling over time, which leads to
temporal fully convolutional training with a single loss for each of the 5 inputs and both streams. We
found that this strategy significantly reduces the training duration, with the drawback that each loss
does not capture all available information. We overcome this by the next training step.
ST-ResNet*. For our final model, we equip the spatiotemporal ResNet with a temporal max-pooling
layer after pool5 (see Table 1; temporal average pooling led to inferior results) and continue training
as above with the learning rate starting from 10^-4 for 2K iterations followed by 10^-5. As indicated
in Table 2, we now use 11 temporal chunks as input with the stride τ between these being randomly
chosen from [1, 15].
Fully convolutional inference. For fair comparison, we follow the evaluation procedure of the
original two-stream work [20] by sampling 25 frames (and their horizontal flips). However, rather
than using 10 spatial 224 × 224 crops from each of the frames, we apply fully convolutional testing
both spatially (smallest side rescaled to 256) and temporally (the 25 frame-chunks) by classifying the
video in a single forward pass, which takes ≈250 ms on a Titan X GPU. For inference, we average
the predictions of the fully connected layers (without softmax) over all spatiotemporal locations.
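The spatiotemporal score averaging at inference can be sketched as follows (an illustrative fragment; tensor names and shapes are our assumptions): class scores produced at every spatial position and temporal chunk are averaged before the argmax.

import numpy as np

def fully_convolutional_predict(fc_scores):
    # fc_scores: (T, H, W, num_classes), pre-softmax outputs of the
    # fully connected layer applied convolutionally over T temporal
    # chunks and an H x W grid of spatial positions.
    video_score = fc_scores.mean(axis=(0, 1, 2))   # average, no softmax
    return video_score.argmax(), video_score

rng = np.random.default_rng(0)
scores = rng.standard_normal((25, 2, 2, 101))      # 25 chunks, 101 classes
label, _ = fully_convolutional_predict(scores)
print(label)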
4 Evaluation
We evaluate our approach on two challenging action recognition datasets. First, we consider UCF101
[22], which consists of 13320 videos showing 101 action classes. It provides large diversity in terms
of actions, variations in background, illumination, camera motion and viewpoint, as well as object
appearance, scale and pose. Second, we consider HMDB51 [15], which has 6766 videos that show 51
different actions and generally is considered more challenging than UCF101 due to the even wider
variations in which actions occur. For both datasets, we use the provided evaluation protocol and
report mean average accuracy over three splits into training and test sets.
4.1 Two-Stream ResNet with additive interactions
Table 3 shows the results of our two-stream architecture across the three training stages outlined
in Sec. 3.4. For stream fusion, we always average the (non-softmaxed) prediction scores of the
classification layer as this approach produces better results than averaging the softmax scores. Initially,
let us consider the performance of the two streams, both initialized with ResNet50 models trained on
ImageNet [8], but without cross-stream residual connections (2) and temporal convolutional layers
(5). The accuracies for UCF101 and HMDB51 are 89.47% and 60.59%, (our HMDB51 motion stream
is initialized from the UCF101 model). Comparatively, a VGG16 two-stream architecture produces
91.4% and 58.5% [1, 31]. In comparing these results it is notable that the VGG16 architecture is
more computationally demanding (19.6 vs. 3.8 billion multiply-add FLOPs) and also holds more
model parameters (135M vs. 34M) than a ResNet50 model.
Dataset   Appearance stream   Motion stream   Two-Streams   ST-ResNet   ST-ResNet*
UCF101    82.29%              79.05%          89.47%        92.76%      93.46%
HMDB51    43.42%              55.47%          60.59%        65.57%      66.41%
Table 3: Classification accuracy on UCF101 and HMDB51 in the three training stages of our model.
We now consider our proposed spatiotemporal ResNet (ST-ResNet), which is initialized by our two-stream ResNet50 model from above and subsequently equipped with 4 residual connections between the
streams and 16 transformed temporal convolution layers (initialized as averaging filters). The model
is trained end-to-end with the loss layers unchanged (we found that using a single, joint softmax
classifier overfits severely to appearance information) and learning parameters chosen as in Table 2.
The results are shown in the penultimate column of Table 3. Our architecture significantly improves
over the two-stream baseline indicating the importance of residual connections between the streams
as well as temporal convolutional connections over time. Interestingly, research in neuroscience
also suggests that the human visual cortex is equipped with connections between the dorsal and the
ventral stream to distribute motion information to separate visual areas [3, 27]. Finally, in the last
column of Table 3 we show results for our ST-ResNet* architecture that is further equipped with a
temporal max-pooling layer to consider larger temporal windows in training and testing. For training
ST-ResNet* we use 11 temporal chunks at the input and the max-pooling layer pools over 5 chunks
to expand the temporal receptive field at the loss layer to a maximum of 705 frames at the input. For
7
testing, where the network sees 25 temporal chunks, we observe that this long-term pooling further
improves accuracy over our ST-ResNet by around 1% on both datasets.
4.2 Comparison with the state-of-the-art
We compare to the state-of-the-art in action recognition over all three splits of UCF101 and HMDB51
in Table 4 (left). We use ST-ResNet*, as above, and predict the videos in a single forward pass using
fully convolutional testing. When comparing to the original two-stream method [20], we improve by
5.4% on UCF101 and by 7% on HMDB51. Apparently, even though the original two-stream approach
has the advantage of multitask learning (HMDB51) and SVM fusion, the benefits of our deeper
architecture with its cross-stream residual connections are greater. Another interesting comparison
is against the two-stream network in [18], which attaches an LSTM to a two-stream Inception [23]
architecture. Their accuracy of 88.6% is to date the best performing approach using LSTMs for action
recognition. Here, our gain of 4.8% further underlines the importance of our architectural choices.
Best ConvNet approaches:
Method                         UCF101   HMDB51
Two-Stream ConvNet [20]        88.0%    59.4%
Two-Stream + LSTM [18]         88.6%    -
Two-Stream (VGG16) [1, 31]     91.4%    58.5%
Transformations [31]           92.4%    62.0%
Two-Stream Fusion [5]          92.5%    65.4%
ST-ResNet*                     93.4%    66.4%

Methods that additionally use IDT features:
Method                               UCF101   HMDB51
IDT [29]                             86.4%    61.7%
C3D + IDT [26]                       90.4%    -
TDD + IDT [30]                       91.5%    65.9%
Dynamic Image Networks + IDT [2]     89.1%    65.2%
Two-Stream Fusion [5]                93.5%    69.2%
ST-ResNet* + IDT                     94.6%    70.3%

Table 4: Mean classification accuracy of the state-of-the-art on HMDB51 and UCF101 for the best
ConvNet approaches (top) and methods that additionally use IDT features (bottom). Our ST-ResNet
obtains best performance on both datasets.
The Transformations [31] method captures the transformation from start to finish of a video by
using two VGG16 Siamese streams (that do not share model parameters, i.e. 4 VGG16 models) to
discriminatively learn a transformation matrix. This method uses considerably more parameters
than our approach, yet is readily outperformed by ours. When comparing with the previously best
performing approach [5], we observe that our method provides a consistent performance gain of
around 1% on both datasets.
The combination of ConvNet methods with trajectory-based hand-crafted IDT features [29] typically
boosts performance nontrivially [2, 26]. Therefore, we further explore the benefits of adding trajectory
features to our approach. We achieve this by simply averaging the L2-normalized SVM scores of the
FV-encoded IDT descriptors (i.e. HOG, HOF, MBH) [29] with the L2-normalized video predictions
of our ST-ResNet*, again without softmax normalization. The results are shown in Table 4 (right)
where we observe a notable boost in accuracy of our approach on HMDB51, albeit less on UCF101.
Note that unlike our approach, the other approaches in Table 4 (right) suffer considerably larger
performance drops when used without IDT, e.g. C3D [26] reduces to 85.2% on UCF101, while
Dynamic Image Networks [2] reduces to 76.9% on UCF101 and 42.8% on HMDB51. These
relatively larger performance decrements again underline that our approach is better able to capture
the available dynamic information, as there is less to be gained by augmenting it with IDT. Still, there
is a benefit from the hand-crafted IDT features even with our approach, which could be attributed to
its explicit compensation of camera motion. Overall, our 94.6% on UCF101 and 70.3% HMDB51
clearly sets a new state-of-the-art on these widely used action recognition datasets.
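The score fusion described here is simple enough to sketch directly; the fragment below (our illustration) L2-normalizes the two score vectors per video and averages them, without any softmax:

import numpy as np

def l2_normalize(v, eps=1e-12):
    return v / (np.linalg.norm(v) + eps)

def fuse_scores(idt_svm_scores, st_resnet_scores):
    # Late fusion of FV-encoded IDT SVM scores with ST-ResNet*
    # predictions: average of the L2-normalized score vectors.
    return 0.5 * (l2_normalize(idt_svm_scores) + l2_normalize(st_resnet_scores))

rng = np.random.default_rng(0)
idt = rng.standard_normal(101)        # one score per UCF101 class
net = rng.standard_normal(101)
print(fuse_scores(idt, net).argmax())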
5 Conclusion
We have presented a novel spatiotemporal ResNet architecture for video-based action recognition. In
particular, our approach is the first to combine two-stream with residual networks and to show the
great advantage that results. Our ST-ResNet allows the hierarchical learning of spacetime features
by connecting the appearance and motion channels of a two-stream architecture. Furthermore, we
transfer both streams from the spatial to the spatiotemporal domain by transforming the dimensionality
mapping filters of a pre-trained model into temporal convolutions, initialized as residual filters over
time. The whole system is trained end-to-end and achieves state-of-the-art performance on two
popular action recognition datasets.
Acknowledgments. This work was supported by the Austrian Science Fund (FWF) under project
P27076 and NSERC. The GPUs used for this research were donated by NVIDIA. Christoph
Feichtenhofer is a recipient of a DOC Fellowship of the Austrian Academy of Sciences at the Institute
of Electrical Measurement and Measurement Signal Processing, Graz University of Technology.
References
[1] Nicolas Ballas, Li Yao, Chris Pal, and Aaron Courville. Delving deeper into convolutional networks for
learning video representations. In Proc. ICLR, 2016.
[2] H. Bilen, B. Fernando, E. Gavves, A. Vedaldi, and S. Gould. Dynamic image networks for action
recognition. In Proc. CVPR, 2016.
[3] Richard T Born and Roger BH Tootell. Segregation of global and local motion processing in primate
middle temporal visual area. Nature, 357(6378):497–499, 1992.
[4] Jeff Donahue, Lisa Anne Hendricks, Sergio Guadarrama, Marcus Rohrbach, Subhashini Venugopalan,
Kate Saenko, and Trevor Darrell. Long-term recurrent convolutional networks for visual recognition and
description. In Proc. CVPR, 2015.
[5] Christoph Feichtenhofer, Axel Pinz, and Andrew Zisserman. Convolutional two-stream network fusion for
video action recognition. In Proc. CVPR, 2016.
[6] M. A. Goodale and A. D. Milner. Separate visual pathways for perception and action. Trends in
Neurosciences, 15(1):20–25, 1992.
[7] Ross Goroshin, Joan Bruna, Jonathan Tompson, David Eigen, and Yann LeCun. Unsupervised feature
learning from temporal data. In Proc. ICCV, 2015.
[8] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition.
arXiv preprint arXiv:1512.03385, 2015.
[9] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Identity mappings in deep residual networks.
arXiv preprint arXiv:1603.05027, 2016.
[10] Yani Ioannou, Duncan Robertson, Jamie Shotton, Roberto Cipolla, and Antonio Criminisi. Training cnns
with low-rank filters for efficient image classification. In Proc. ICLR, 2016.
[11] Sergey Ioffe and Christian Szegedy. Batch normalization: Accelerating deep network training by reducing
internal covariate shift. In Proc. ICML, 2015.
[12] S. Ji, W. Xu, M. Yang, and K. Yu. 3D convolutional neural networks for human action recognition. IEEE
PAMI, 35(1):221–231, 2013.
[13] A. Karpathy, G. Toderici, S. Shetty, T. Leung, R. Sukthankar, and L. Fei-Fei. Large-scale video classification
with convolutional neural networks. In Proc. CVPR, 2014.
[14] A. Krizhevsky, I. Sutskever, and G. E. Hinton. ImageNet classification with deep convolutional neural
networks. In NIPS, 2012.
[15] Hildegard Kuehne, Hueihan Jhuang, Estíbaliz Garrote, Tomaso Poggio, and Thomas Serre. HMDB: a large
video database for human motion recognition. In Proc. ICCV, 2011.
[16] Quoc V Le, Will Y Zou, Serena Y Yeung, and Andrew Y Ng. Learning hierarchical invariant spatiotemporal features for action recognition with independent subspace analysis. In Proc. CVPR, 2011.
[17] Behrooz Mahasseni and Sinisa Todorovic. Regularizing long short term memory with 3D human-skeleton
sequences for action recognition. In Proc. CVPR, 2016.
[18] Joe Yue-Hei Ng, Matthew Hausknecht, Sudheendra Vijayanarasimhan, Oriol Vinyals, Rajat Monga, and
George Toderici. Beyond short snippets: Deep networks for video classification. In Proc. CVPR, 2015.
[19] Shikhar Sharma, Ryan Kiros, and Ruslan Salakhutdinov. Action recognition using visual attention. In
NIPS workshop on Time Series. 2015.
[20] K. Simonyan and A. Zisserman. Two-stream convolutional networks for action recognition in videos. In
NIPS, 2014.
[21] Karen Simonyan and Andrew Zisserman. Very deep convolutional networks for large-scale image recognition. In Proc. ICLR, 2014.
[22] Khurram Soomro, Amir Roshan Zamir, and Mubarak Shah. UCF101: A dataset of 101 human actions
classes from videos in the wild. Technical Report CRCV-TR-12-01, 2012.
[23] Christian Szegedy, Vincent Vanhoucke, Sergey Ioffe, Jonathon Shlens, and Zbigniew Wojna. Rethinking
the inception architecture for computer vision. arXiv preprint arXiv:1512.00567, 2015.
[24] Christian Szegedy, Sergey Ioffe, and Vincent Vanhoucke. Inception-v4, inception-resnet and the impact of
residual connections on learning. arXiv preprint arXiv:1602.07261, 2016.
[25] G. W. Taylor, R. Fergus, Y. LeCun, and C. Bregler. Convolutional learning of spatio-temporal features. In
Proc. ECCV, 2010.
[26] D. Tran, L. Bourdev, R. Fergus, L. Torresani, and M. Paluri. Learning spatiotemporal features with 3D
convolutional networks. In Proc. ICCV, 2015.
[27] David C Van Essen and Jack L Gallant. Neural mechanisms of form and motion processing in the primate
visual system. Neuron, 13(1):1–10, 1994.
[28] A. Vedaldi and K. Lenc. MatConvNet - convolutional neural networks for MATLAB. In Proceedings of the
ACM Int. Conf. on Multimedia, 2015.
[29] Heng Wang and Cordelia Schmid. Action recognition with improved trajectories. In Proc. ICCV, 2013.
[30] Limin Wang, Yu Qiao, and Xiaoou Tang. Action recognition with trajectory-pooled deep-convolutional
descriptors. In Proc. CVPR, 2015.
[31] Xiaolong Wang, Ali Farhadi, and Abhinav Gupta. Actions ~ Transformations. In Proc. CVPR, 2016.
[32] C. Zach, T. Pock, and H. Bischof. A duality based approach for realtime TV-L1 optical flow. In Proc.
DAGM, pages 214–223, 2007.
6,007 | 6,434 | Adaptive Smoothed Online Multi-Task Learning
Keerthiram Murugesan∗
Carnegie Mellon University
kmuruges@cs.cmu.edu
Hanxiao Liu∗
Carnegie Mellon University
hanxiaol@cs.cmu.edu
Jaime Carbonell
Carnegie Mellon University
jgc@cs.cmu.edu
Yiming Yang
Carnegie Mellon University
yiming@cs.cmu.edu
Abstract
This paper addresses the challenge of jointly learning both the per-task model
parameters and the inter-task relationships in a multi-task online learning setting.
The proposed algorithm features probabilistic interpretation, efficient updating
rules and flexible modulation on whether learners focus on their specific task or
on jointly address all tasks. The paper also proves a sub-linear regret bound as
compared to the best linear predictor in hindsight. Experiments over three multitask learning benchmark datasets show advantageous performance of the proposed
approach over several state-of-the-art online multi-task learning baselines.
1 Introduction
The power of joint learning in multiple tasks arises from the transfer of relevant knowledge across
said tasks, especially from information-rich tasks to information-poor ones. Instead of learning
individual models, multi-task methods leverage the relationships between tasks to jointly build
a better model for each task. Most existing work in multi-task learning focuses on how to take
advantage of these task relationships, either to share data directly [1] or to learn model parameters via
cross-task regularization techniques [2, 3, 4]. In a broad sense, there are two settings to learn these
task relationships 1) batch learning, in which an entire training set is available to the learner 2) online
learning, in which the learner sees the data in a sequential fashion. In recent years, online multi-task
learning has attracted extensive research attention [5, 6, 7, 8, 9].
Following the online setting, particularly from [6, 7], at each round t, the learner receives a set of K
observations from K tasks and predicts the output label for each of these observations. Subsequently,
the learner receives the true labels and updates the model(s) as necessary. This sequence is repeated
over the entire data, simulating a data stream. Our approach follows an error-driven update rule in
which the model for a given task is updated only when the prediction for that task is in error. The goal
of an online learner is to minimize errors compared to the full hindsight learner. The key challenge
in online learning with large number of tasks is to adaptively learn the model parameters and the
task relationships, which potentially change over time. Without manageable efficient updates at each
round, learning the task relationship matrix automatically may impose a severe computational burden.
In other words, we need to make predictions and update the models in an efficient real time manner.
We propose an online learning framework that efficiently learns multiple related tasks by estimating
the task relationship matrix from the data, along with the model parameters for each task. We learn
the model for each task by sharing data from related task directly. Our model provides a natural
way to specify the trade-off between learning the hypothesis from each task?s own (possibly quite
∗ Both student authors contributed equally.
30th Conference on Neural Information Processing Systems (NIPS 2016), Barcelona, Spain.
limited) data and data from multiple related tasks. We propose an iterative algorithm to learn the task
parameters and the task-relationship matrix alternatively. We first describe our proposed approach
under a batch setting and then extend it to the online learning paradigm. In addition, we provide a
theoretical analysis for our online algorithm and show that it can achieve a sub-linear regret compared
to the best linear predictor in hindsight. We evaluate our model with several state-of-the-art online
learning algorithms for multiple tasks.
There are many useful application areas for online multi-task learning, including optimizing financial
trading, email prioritization, personalized news, and spam filtering. Consider the latter, where some
spam is universal to all users (e.g. financial scams), some messages might be useful to certain affinity
groups, but spam to most others (e.g. announcements of meditation classes or other special interest
activities), and some may depend on evolving user interests. In spam filtering each user is a task,
and shared interests and dis-interests formulate the inter-task relationship matrix. If we can learn
the matrix as well as improving models from specific spam/not-spam decisions, we can perform
mass customization of spam filtering, borrowing from spam/not-spam feedback from users with
similar preferences. The primary contribution of this paper is precisely the joint learning of inter-task
relationships and its use in estimating per-task model parameters in an online setting.
1.1
Related Work
While there is considerable literature in online multi-task learning, many crucial aspects remain largely
unexplored. Most existing work in online multi-task learning focuses on how to take advantage of
task relationships. To achieve this, Lugosi et al. [7] imposed a hard constraint on the K simultaneous
actions taken by the learner in the expert setting, Agarwal et al. [10] used matrix regularization, and
Dekel et al. [6] proposed a global loss function, as an absolute norm, to tie together the loss values of
the individual tasks. Different from existing online multi-task learning models, our paper proposes an
intuitive and efficient way to learn the task relationship matrix automatically from the data, and to
explicitly take into account the learned relationships during model updates.
Cavallanti et al. [8] assumes that task relationships are available a priori. Kshirsagar et al. [11]
does the same but in a more adaptive manner. However such task-relation prior knowledge is either
unavailable or infeasible to obtain for many applications especially when the number of tasks K
is large [12] and/or when the manual annotation of task relationships is expensive [13]. Saha et
al. [9] formulated the learning of task relationship matrix as a Bregman-divergence minimization
problem w.r.t. positive definite matrices. The model suffers from high computational complexity as
semi-definite programming is required when updating the task relationship matrix at each online
round. We show that with a different formulation, we can obtain a similar but much cheaper updating
rule for learning the inter-task weights.
The most related work to ours is Shared Hypothesis model (SHAMO) from Crammer and Mansour
[1], where the key idea is to use a K-means-like procedure that simultaneously clusters different
tasks and learns a small pool of m ≪ K shared hypotheses. Specifically, each task is free to choose
a hypothesis from the pool that better classifies its own data, and each hypothesis is learned from
pooling together all the training data that belongs to the same cluster. A similar idea was explored by
Abernathy et al. [5] under expert settings.
2 Smoothed Multi-Task Learning
2.1 Setup
Suppose we are given K tasks where the j-th task is associated with $N_j$ training examples. For brevity
we consider a binary classification problem for each task, but the methods generalize to multi-class
and are also applicable to regression tasks. We denote by [N] the consecutive integers ranging from
1 to N. Let $\{(x_j^{(i)}, y_j^{(i)})\}_{i=1}^{N_j}$ and $L_j(w) = \frac{1}{N_j} \sum_{i \in [N_j]} \left(1 - y_j^{(i)} \langle x_j^{(i)}, w \rangle\right)_+$ be the training set and
batch empirical loss for task j, respectively, where $(z)_+ = \max(0, z)$, $x_j^{(i)} \in \mathbb{R}^d$ is the i-th instance
from the j-th task and $y_j^{(i)}$ is its corresponding true label.
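For concreteness, the batch hinge loss $L_j(w)$ can be computed as in the short NumPy sketch below (our illustration; the array names are assumptions):

import numpy as np

def batch_hinge_loss(X, y, w):
    # X: (N_j, d) instances of task j; y: (N_j,) labels in {-1, +1}.
    # L_j(w) = (1 / N_j) * sum_i max(0, 1 - y_i * <x_i, w>).
    margins = 1.0 - y * (X @ w)
    return np.maximum(margins, 0.0).mean()

rng = np.random.default_rng(0)
X = rng.standard_normal((100, 9))   # e.g. 9 features as in Landmine
y = np.sign(rng.standard_normal(100))
w = rng.standard_normal(9)
print(batch_hinge_loss(X, y, w))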
We start from the motivation of our formulation in Section 2.2, based on which we first propose a
batch formulation in Section 2.3. Then, we extend the method to the online setting in Section 2.4.
2.2 Motivation
Learning tasks may be addressed independently via $w_k^* = \operatorname{argmin}_{w_k} L_k(w_k)$, $\forall k \in [K]$. However,
when each task has limited training data, it is often beneficial to allow information sharing among the
tasks, which can be achieved via the following optimization:
$$w_k^* = \operatorname{argmin}_{w_k} \sum_{j \in [K]} \eta_{kj} L_j(w_k) \quad \forall k \in [K] \qquad (1)$$
Beyond each task k, optimization (1) encourages hypothesis $w_k^*$ to do well on the remaining K − 1
tasks, thus allowing tasks to borrow information from each other. In the extreme case where the K
tasks have an identical data distribution, optimization (1) amounts to using $\sum_{j \in [K]} N_j$ examples for
training as compared to $N_k$ in independent learning.
The weight matrix η is in essence a task relationship matrix, and a prior may be manually specified
according to domain knowledge about the tasks. For instance, $\eta_{kj}$ would typically be set to a large
value if tasks k and j share similar nature. If $\eta = I$, (1) reduces to learning tasks independently. It
is clear that manual specification of η is feasible only when K is small. Moreover, tasks may be
statistically correlated even if a domain expert is unavailable to identify an explicit relation, or if
the effort required is too great. Hence, it is often desirable to automatically estimate the optimal η
adapted to the inter-task problem structure.
We propose to learn η in a data-driven manner. For the k-th task, we optimize
$$w_k^*, \eta_k^* = \operatorname{argmin}_{w_k,\, \eta_k \in \Theta} \sum_{j \in [K]} \eta_{kj} L_j(w_k) + \lambda\, r(\eta_k) \qquad (2)$$
where Θ defines the feasible domain of $\eta_k$, and regularizer r prevents degenerate cases, e.g., where
$\eta_k$ becomes an all-zero vector. Optimization (2) shares the same underlying insight with Self-Paced
Learning (SPL) [14, 15] where the algorithm automatically learns the weights over data points during
training. However, the process and scope in the two methods differ fundamentally: SPL minimizes
the weighted loss over datapoints within a single domain, while optimization (2) minimizes the
weighted loss over multiple tasks across possibly heterogeneous domains.
A common choice of Θ and $r(\eta_k)$ in SPL is $\Theta = [0, 1]^K$ and $r(\eta_k) = -\|\eta_k\|_1$. There are several
drawbacks of naively applying this type of setting to the multitask scenario: (i) Lack of focus:
there is no guarantee that the k-th learner will put more focus on the k-th task itself. When task k is
intrinsically difficult, $\eta_{kk}^*$ could simply be set near zero and $w_k^*$ becomes almost independent of the
k-th task. (ii) Weak interpretability: the learned $\eta_k^*$ may not be interpretable as it is not directly tied to
any physical meanings. (iii) Lack of worst-case guarantee in the online setting. All those issues will
be addressed by our proposed model in the following.
2.3 Batch Formulation
We parametrize the aforementioned task relationship matrix $\eta \in \mathbb{R}^{K \times K}$ as follows:
$$\eta = \alpha I_K + (1 - \alpha) P \qquad (3)$$
where $I_K \in \mathbb{R}^{K \times K}$ is an identity matrix, $P \in \mathbb{R}^{K \times K}$ is a row-stochastic matrix and α is a scalar in
[0, 1]. Task relationship matrix η defined as above has the following interpretations:
1. Concentration Factor α quantifies the learners' "concentration" on their own tasks. Setting
α = 1 amounts to independent learning. We will see from the forthcoming Theorem 1 how
to specify α to ensure the optimality of the online regret bound.
2. Smoothed Attention Matrix P quantifies to which degree the learners are attentive to all tasks.
Specifically, define the k-th row of P, namely $p_k \in \Delta^{K-1}$, as a probability distribution over
all tasks, where $\Delta^{K-1}$ denotes the probability simplex. Our goal of learning a data-adaptive
η now becomes learning a data-adaptive attention matrix P.
Common choices about η in several existing algorithms are special cases of (3). For instance, domain
adaptation assumes α = 0 and a fixed row-stochastic matrix P; in multi-task learning, we obtain the
effective heuristic of specifying η by Cavallanti et al. [8] when $\alpha = \frac{1}{1+K}$ and $P = \frac{1}{K}\mathbf{1}\mathbf{1}^\top$. When
there are $m \ll K$ unique distributions $p_k$, the problem reduces to the SHAMO model [1].
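The parametrization in (3) is easy to instantiate; the sketch below (our illustration) builds η from a concentration factor α and a row-stochastic attention matrix P, and checks the properties discussed next:

import numpy as np

def build_task_relationship(P, alpha):
    # Eq. (3): eta = alpha * I_K + (1 - alpha) * P, with P row-stochastic.
    K = P.shape[0]
    return alpha * np.eye(K) + (1.0 - alpha) * P

K, alpha = 4, 0.6
rng = np.random.default_rng(0)
P = rng.random((K, K))
P /= P.sum(axis=1, keepdims=True)          # make rows sum to one
eta = build_task_relationship(P, alpha)
print(np.allclose(eta.sum(axis=1), 1.0))   # True: eta is row-stochastic
print((np.diag(eta) >= alpha).all())       # True: eta_kk >= alpha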
Equation (3) implies the task relationship matrix η is also row-stochastic, where we always reserve
probability α for the k-th task itself as $\eta_{kk} \ge \alpha$. For each learner, the presence of α entails a trade-off
between learning from other tasks and concentrating on its own task. Note that we do not require
P to be symmetric due to the asymmetric nature of information transferability: while classifiers
trained on a resource-rich task can be well transferred to a resource-scarce task, the inverse is not
usually true. Motivated by the above discussion, our batch formulation instantiates (2) as follows:
$$w_k^*, p_k^* = \operatorname{argmin}_{w_k,\, p_k \in \Delta^{K-1}} \sum_{j \in [K]} \eta_{kj}(p_k)\, L_j(w_k) - \lambda H(p_k) \qquad (4)$$
$$= \operatorname{argmin}_{w_k,\, p_k \in \Delta^{K-1}} \mathbb{E}_{j \sim \mathrm{Multinomial}(\eta_k(p_k))}\left[ L_j(w_k) \right] - \lambda H(p_k) \qquad (5)$$
where $H(p_k) = -\sum_{j \in [K]} p_{kj} \log p_{kj}$ denotes the entropy of distribution $p_k$. Optimization (4) can be
viewed as balancing between minimizing the cross-task loss with mixture weights $\eta_k$ and maximizing
the smoothness of cross-task attention. The max-entropy regularization favours a uniform attention
over all tasks and leads to analytical updating rules for $p_k$ (and $\eta_k$).
Optimization (4) is biconvex over $w_k$ and $p_k$. With $p_k^{(t)}$ fixed, the solution for $w_k$ can be obtained using
off-the-shelf solvers. With $w_k^{(t)}$ fixed, the solution for $p_k$ is given in closed form:
$$p_{kj}^{(t+1)} = \frac{e^{-\frac{1-\alpha}{\lambda} L_j(w_k^{(t)})}}{\sum_{j'=1}^{K} e^{-\frac{1-\alpha}{\lambda} L_{j'}(w_k^{(t)})}} \quad \forall j \in [K] \qquad (6)$$
The exponential updating rule in (6) has an intuitive interpretation. That is, our algorithm attempts
to use hypothesis $w_k^{(t)}$ obtained from the k-th task to classify training examples in all other tasks.
Task j will be treated as related to task k if its training examples can be well classified by $w_k^{(t)}$. The
intuition is that two tasks are likely to relate to each other if they share similar decision boundaries,
thus combining their associated data should yield a stronger model, trained over larger data.
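The closed-form update (6) is a softmax over negated, scaled task losses; a minimal NumPy sketch (our illustration) is:

import numpy as np

def update_attention(task_losses, alpha, lam):
    # Eq. (6): p_kj proportional to exp(-(1 - alpha)/lambda * L_j(w_k)).
    logits = -(1.0 - alpha) / lam * np.asarray(task_losses)
    logits -= logits.max()                 # numerical stability
    p = np.exp(logits)
    return p / p.sum()

losses = [0.2, 0.9, 0.25, 1.5]             # L_j(w_k) for K = 4 tasks
p_k = update_attention(losses, alpha=0.6, lam=0.5)
print(p_k.round(3), p_k.sum())             # low-loss tasks get more weight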
2.4 Online Formulation
In this section, we extend our batch formulation to the online setting. We assume that all tasks will be
performed at each round, though the assumption can be relaxed with some added complexity to the
method. At time t, the k-th task receives a training instance $x_k^{(t)}$, makes a prediction $\langle x_k^{(t)}, w_k^{(t)} \rangle$ and
suffers a loss after $y_k^{(t)}$ is revealed. Our algorithm follows an error-driven update rule in which the
model is updated only when a task makes a mistake.
Let $\ell_{kj}^{(t)}(w) = 1 - y_j^{(t)} \langle x_j^{(t)}, w \rangle$ if $y_j^{(t)} \langle x_j^{(t)}, w_k^{(t)} \rangle < 1$ and $\ell_{kj}^{(t)}(w) = 0$ otherwise. For brevity, we
introduce the shorthands $\ell_{kj}^{(t)} = \ell_{kj}^{(t)}(w_k^{(t)})$ and $\eta_{kj}^{(t)} = \eta_{kj}(p_k^{(t)})$.
For the k-th task we consider the following optimization problem at each time:
$$w_k^{(t+1)}, p_k^{(t+1)} = \operatorname{argmin}_{w_k,\, p_k \in \Delta^{K-1}} C \sum_{j \in [K]} \eta_{kj}(p_k)\, \ell_{kj}^{(t)}(w_k) + \|w_k - w_k^{(t)}\|^2 + \lambda\, D_{\mathrm{KL}}\!\left(p_k \,\|\, p_k^{(t)}\right) \qquad (7)$$
where $\sum_{j \in [K]} \eta_{kj}(p_k)\, \ell_{kj}^{(t)}(w_k) = \mathbb{E}_{j \sim \mathrm{Multi}(\eta_k(p_k))}\left[\ell_{kj}^{(t)}(w_k)\right]$, and $D_{\mathrm{KL}}(p_k \| p_k^{(t)})$ denotes the Kullback-Leibler
(KL) divergence between current and previous soft-attention distributions. The presence
of the last two terms in (7) allows the model parameters to evolve smoothly over time. Optimization
(7) is naturally analogous to the batch optimization (4), where the batch loss $L_j(w_k)$ is replaced
by its noisy version $\ell_{kj}^{(t)}(w_k)$ at time t, and the negative entropy $-H(p_k) = \sum_j p_{kj} \log p_{kj}$ is replaced
by $D_{\mathrm{KL}}(p_k \| p_k^{(t)})$, also known as the relative entropy. We will show the above formulation leads to
analytical updating rules for both $w_k$ and $p_k$, a desirable property particularly for an online algorithm.
The solution for $w_k^{(t+1)}$ conditioned on $p_k^{(t)}$ is given in closed form by the proximal operator
$$w_k^{(t+1)} = \operatorname{prox}(w_k^{(t)}) = \operatorname{argmin}_{w_k} C \sum_{j \in [K]} \eta_{kj}(p_k^{(t)})\, \ell_{kj}^{(t)}(w_k) + \|w_k - w_k^{(t)}\|^2 \qquad (8)$$
$$= w_k^{(t)} + C \sum_{j:\, y_j^{(t)} \langle x_j^{(t)}, w_k^{(t)} \rangle < 1} \eta_{kj}(p_k^{(t)})\, y_j^{(t)} x_j^{(t)} \qquad (9)$$
The solution for $p_k^{(t+1)}$ conditioned on $w_k^{(t)}$ is also given in closed form, analogous to mirror descent [16]:
$$p_k^{(t+1)} = \operatorname{argmin}_{p_k \in \Delta^{K-1}} C(1-\alpha) \sum_{j \in [K]} p_{kj}\, \ell_{kj}^{(t)} + \lambda\, D_{\mathrm{KL}}\!\left(p_k \,\|\, p_k^{(t)}\right) \qquad (10)$$
$$\Longrightarrow\quad p_{kj}^{(t+1)} = \frac{p_{kj}^{(t)}\, e^{-\frac{C(1-\alpha)}{\lambda} \ell_{kj}^{(t)}}}{\sum_{j'} p_{kj'}^{(t)}\, e^{-\frac{C(1-\alpha)}{\lambda} \ell_{kj'}^{(t)}}} \quad j \in [K] \qquad (11)$$
The pseudo-code is in Algorithm 2.² Our algorithm is "passive" in the sense that updates are carried
out only when a classification error occurs, namely when $\hat{y}_k^{(t)} \ne y_k^{(t)}$. An alternative is to perform
"aggressive" updates only when the active set $\{j : y_j^{(t)} \langle x_j^{(t)}, w_k^{(t)} \rangle < 1\}$ is non-empty.
Algorithm 1: Batch Algorithm (SMTL-e)
  while not converged do
    for k ∈ [K] do
      w_k^{(t)} ← argmin_{w_k} α L_k(w_k) + (1 − α) Σ_{j∈[K]} p_kj^{(t)} L_j(w_k);
      for j ∈ [K] do
        p_kj^{(t+1)} ← exp(−(1−α)/λ · L_j(w_k^{(t)})) / Σ_{j'=1}^{K} exp(−(1−α)/λ · L_{j'}(w_k^{(t)}));
      end
    end
    t ← t + 1;
  end

Algorithm 2: Online Algorithm (OSMTL-e)
  for t ∈ [T] do
    for k ∈ [K] do
      if y_k^{(t)} ⟨x_k^{(t)}, w_k^{(t)}⟩ < 1 then
        w_k^{(t+1)} ← w_k^{(t)} + Cα 1[ℓ_kk^{(t)} > 0] y_k^{(t)} x_k^{(t)} + C(1−α) Σ_{j: ℓ_kj^{(t)} > 0} p_kj^{(t)} y_j^{(t)} x_j^{(t)};
        for j ∈ [K] do
          p_kj^{(t+1)} ← p_kj^{(t)} exp(−C(1−α)/λ · ℓ_kj^{(t)}) / Σ_{j'} p_kj'^{(t)} exp(−C(1−α)/λ · ℓ_kj'^{(t)});
        end
      else
        w_k^{(t+1)}, p_k^{(t+1)} ← w_k^{(t)}, p_k^{(t)};
      end
    end
  end
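A minimal NumPy sketch of one OSMTL-e round for task k (our illustration; variable names are assumptions) combines the weight update (9), written in its decomposed form from Algorithm 2, with the attention update (11):

import numpy as np

def osmtl_round(w_k, p_k, X, y, k, C, alpha, lam):
    # X: (K, d), one instance per task this round; y: (K,) labels in {-1, +1}.
    losses = np.maximum(0.0, 1.0 - y * (X @ w_k))      # ell_kj^(t)
    if y[k] * (X[k] @ w_k) < 1:                        # error-driven update
        active = losses > 0
        w_k = (w_k
               + C * alpha * y[k] * X[k]               # self term
               + C * (1 - alpha) * ((p_k * active * y) @ X))
        weights = p_k * np.exp(-C * (1 - alpha) / lam * losses)
        p_k = weights / weights.sum()                  # eq. (11)
    return w_k, p_k

rng = np.random.default_rng(0)
K, d = 4, 5
w, p = np.zeros(d), np.full(K, 1.0 / K)
X, y = rng.standard_normal((K, d)), np.sign(rng.standard_normal(K))
w, p = osmtl_round(w, p, X, y, k=0, C=1.0, alpha=0.6, lam=0.5)
print(p.round(3))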
2.5 Regret Bound
Theorem 1. $\forall k \in [K]$, let $S_k = \{(x_k^{(t)}, y_k^{(t)})\}_{t=1}^{T}$ be a sequence of T examples for the k-th task
where $x_k^{(t)} \in \mathbb{R}^d$, $y_k^{(t)} \in \{-1, +1\}$ and $\|x_k^{(t)}\|_2 \le R$, $\forall t \in [T]$. Let C be a positive constant
and let α be some predefined parameter in [0, 1]. Let $\{w_k^*\}_{k \in [K]}$ be any arbitrary vectors where
$w_k^* \in \mathbb{R}^d$ and its hinge losses on the examples $(x_k^{(t)}, y_k^{(t)})$ and $\{(x_j^{(t)}, y_j^{(t)})\}_{j \ne k}$ are given by
$\ell_{kk}^{(t)*} = \left(1 - y_k^{(t)} \langle x_k^{(t)}, w_k^* \rangle\right)_+$ and $\ell_{kj}^{(t)*} = \left(1 - y_j^{(t)} \langle x_j^{(t)}, w_k^* \rangle\right)_+$, respectively.
If $\{S_k\}_{k \in [K]}$ is presented to the OSMTL algorithm, then $\forall k \in [K]$ we have
$$\sum_{t \in [T]} \left( \ell_{kk}^{(t)} - \ell_{kk}^{(t)*} \right) \le \frac{1}{2C\alpha} \|w_k^*\|^2 + \frac{(1-\alpha)T}{\alpha} \left( \ell_{kk}^{(t)*} + \max_{j \in [K], j \ne k} \ell_{kj}^{(t)*} \right) + \frac{CR^2 T}{2\alpha} \qquad (12)$$
Notice that when α → 1, the above reduces to the perceptron mistake bound [17].
² It is recommended to set $\alpha \approx \frac{\sqrt{T}}{1+\sqrt{T}}$ and $C \approx \frac{1+\sqrt{T}}{T}$, as suggested by Corollary 2.
Corollary 2. Let $\alpha = \frac{\sqrt{T}}{1+\sqrt{T}}$ and $C = \frac{1+\sqrt{T}}{T}$ in Theorem 1. Then we have
$$\sum_{t \in [T]} \left( \ell_{kk}^{(t)} - \ell_{kk}^{(t)*} \right) \le \sqrt{T} \left( \frac{1}{2} \|w_k^*\|^2 + \ell_{kk}^{(t)*} + \max_{j \in [K], j \ne k} \ell_{kj}^{(t)*} + 2R^2 \right) \qquad (13)$$
Proofs are given in the supplementary. Theorem 1 and Corollary 2 have several implications:
1. The quality of the bound depends on both $\ell_{kk}^{(t)*}$ and the maximum of $\{\ell_{kj}^{(t)*}\}_{j \in [K], j \ne k}$. In other
words, the worst-case regret will be lower if the k-th true hypothesis $w_k^*$ can well distinguish
training examples in both the k-th task itself as well as those in all the other tasks.
2. Corollary 2 indicates the difference between the cumulative loss achieved by our algorithm
and by any fixed hypothesis for task k is bounded by a term growing sub-linearly in T.
3. Corollary 2 provides a principled way to set hyperparameters to achieve the sub-linear
regret bound. Specifically, recall α quantifies the self-concentration of each task. Therefore,
$\alpha = \frac{\sqrt{T}}{1+\sqrt{T}} \to 1$ as $T \to \infty$ implies that for a large horizon it would be less necessary to rely on other tasks,
as the available supervision for the task itself is already plenty; $C = \frac{1+\sqrt{T}}{T} \to 0$ as $T \to \infty$ suggests a
diminishing learning rate over the horizon length.
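These recommended settings are trivial to compute; a small sketch (our illustration):

import math

def recommended_hyperparameters(T):
    # From Corollary 2: alpha = sqrt(T)/(1 + sqrt(T)), C = (1 + sqrt(T))/T.
    s = math.sqrt(T)
    return s / (1.0 + s), (1.0 + s) / T

for T in (100, 10_000, 1_000_000):
    alpha, C = recommended_hyperparameters(T)
    print(T, round(alpha, 4), round(C, 6))  # alpha -> 1, C -> 0 as T grows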
3 Experiments
We evaluate the performance of our algorithm under batch and online settings. All reported results in
this section are averaged over 30 random runs or permutations of the training data. Unless otherwise
specified, all model parameters are chosen via 5-fold cross validation.
3.1 Benchmark Datasets
We use three datasets for our experiments. Details are given below:
Landmine Detection³ consists of 19 tasks collected from different landmine fields. Each task is a
binary classification problem: landmines (+) or clutter (−), and each example consists of 9 features
extracted from radar images with four moment-based features, three correlation-based features, one
energy ratio feature and a spatial variance feature. Landmine data is collected from two different
terrains: tasks 1-10 are from highly foliated regions and tasks 11-19 are from desert regions, therefore
tasks naturally form two clusters. Any hypothesis learned from a task should be able to utilize the
information available from other tasks belonging to the same cluster.
Spam Detection⁴ We use the dataset obtained from the ECML PAKDD 2006 Discovery Challenge for the spam detection task. We used the task B challenge dataset, which consists of labeled training data from the inboxes of 15 users. We consider each user as a single task, and the goal is to build a personalized spam filter for each user. Each task is a binary classification problem: spam (+) or non-spam (−), and each example consists of approximately 150K features representing term frequency of the word occurrences. Since some spam is universal to all users (e.g. financial scams), some messages might be useful to certain affinity groups but spam to most others. Such adaptive behavior of users' interests and disinterests can be modeled efficiently by utilizing the data from other users to learn per-user model parameters.
Sentiment Analysis⁵ We evaluated our algorithm on product reviews from Amazon. The dataset contains product reviews from 24 domains. We consider each domain as a binary classification task. Reviews with rating > 3 were labeled positive (+), those with rating < 3 were labeled negative (−), and reviews with rating = 3 were discarded, as the sentiments were ambiguous and hard to predict. Similar to the previous dataset, each example consists of approximately 350K features representing term frequency of the word occurrences.
We choose 3040 examples (160 training examples per task) for landmine, 1500 emails for spam (100 emails per user inbox) and 2400 reviews for sentiment (100 reviews per domain) for our experiments.
³ http://www.ee.duke.edu/~lcarin/LandmineData.zip
⁴ http://ecmlpkdd2006.org/challenge.html
⁵ http://www.cs.jhu.edu/~mdredze/datasets/sentiment
Figure 1: Average AUC calculated for compared models (left). A visualization of the task-relationship matrix in Landmine learned by SMTL-t (middle) and SMTL-e (right). The probabilistic formulation of SMTL-e allows it to discover more interesting patterns than SMTL-t.

Note that we intentionally kept the size of the training data small to drive the need for learning from other tasks, which diminishes as the training sets per task become large. Since all these datasets have a class-imbalance issue (with few (+) examples as compared to (−) examples), we use the average Area Under the ROC Curve (AUC) as the performance measure.
3.2 Batch Setting
Since the main focus of this paper is online learning, we briefly conduct an experiment on the landmine detection dataset for our batch learning to demonstrate the advantages of learning from shared data. We implement two versions of our proposed algorithm with different updates: SMTL-t (SMTL with thresholding updates), where p_{kj}^{(t+1)} ∝ (ρ − ℓ_{kj}^{(t)})_+,⁶ and SMTL-e (SMTL with exponential updates) as in Algorithm 1. We compare our SMTL* with two standard baseline methods for our batch setting: Independent Task Learning (ITL), learning a single model for each task, and Single Task Learning (STL), learning a single classification model for pooled data from all the tasks. In addition we compare our models with SHAMO, which is closest in spirit to our proposed models. We select the values of λ and ρ for SMTL* and M for SHAMO using cross validation.
Figure 1 (left) shows the average AUC calculated for different training sizes on landmine. We can see that the baseline results are similar to the ones reported by Xue et al. [3]. Our proposed algorithm (SMTL*) outperforms the other baselines, but when we have very few training examples (say 20 per task), the performance of STL improves as it has more examples than the others. Since p_{kj} depends on the loss incurred on the data from related tasks, this loss-based measure can be unreliable for a small training sample size. To our surprise, SHAMO performs worse than the other models, which tells us that assuming two tasks are exactly the same (in the sense of hypothesis) may be inappropriate in real-world applications. Figure 1 (middle & right) shows the task-relationship matrix P for SMTL-t and SMTL-e on landmine when the number of training instances is 160 per task.
3.3 Online Setting
To evaluate the performance of our algorithm in the online setting, we use all three datasets (landmine, spam and sentiment) and compare our proposed methods to 5 baselines. We implemented two variations of the Passive-Aggressive algorithm (PA) [18]: PA-ITL learns an independent model for each task, and PA-ONE builds a single model for all the tasks. We also implemented the algorithm proposed by Dekel et al. for online multi-task learning with shared loss (OSGL) [6]. These three baselines do not exploit the task relationships or the data from other tasks during model updates. Next, we implemented two online multi-task learning methods related to our approach: FOML, which initializes the task weights p_k with fixed values [8], and Online Multi-Task Relationship Learning (OMTRL) [9], which learns a task covariance matrix along with task parameters. We could not find a better way to implement an online version of the SHAMO algorithm, since the number of shared hypotheses or clusters varies over time.
⁶ Our algorithm and theorem can be easily generalized to other types of updating rules by replacing exp in (6) with other functions. In the latter cases, however, p_k may no longer have a probabilistic interpretation.
Table 1: Average performance on three datasets: means and standard errors over 30 random shuffles.

Landmine Detection
    Models     AUC              nSV               Time (s)
    PA-ONE     0.5473 (0.12)    2902.9 (4.21)     0.01
    PA-ITL     0.5986 (0.04)    618.1 (27.31)     0.01
    OSGL       0.6482 (0.03)    740.8 (42.03)     0.01
    FOML       0.6322 (0.04)    426.5 (36.91)     0.11
    OMTRL      0.6409 (0.05)    432.2 (123.81)    6.9
    OSMTL-t    0.6776 (0.03)    333.6 (40.66)     0.18
    OSMTL-e    0.6404 (0.04)    458 (36.79)       0.19

Spam Detection
    Models     AUC              nSV               Time (s)
    PA-ONE     0.8739 (0.01)    1455.0 (4.64)     0.16
    PA-ITL     0.8350 (0.01)    1499.9 (0.37)     0.16
    OSGL       0.9551 (0.007)   1402.6 (13.57)    0.17
    FOML       0.9347 (0.009)   819.8 (18.57)     1.5
    OMTRL      0.9343 (0.008)   840.4 (22.67)     53.6
    OSMTL-t    0.9509 (0.007)   809.5 (19.35)     1.4
    OSMTL-e    0.9596 (0.006)   804.2 (19.05)     1.3

Sentiment Analysis
    Models     AUC              nSV               Time (s)
    PA-ONE     0.7193 (0.03)    2350.7 (6.36)     0.19
    PA-ITL     0.7364 (0.02)    2399.9 (0.25)     0.16
    OSGL       0.8375 (0.02)    2369.3 (14.63)    0.17
    FOML       0.8472 (0.02)    1356.0 (78.49)    1.20
    OMTRL      0.7831 (0.02)    1346.2 (85.99)    128
    OSMTL-t    0.9354 (0.01)    1312.8 (79.15)    2.15
    OSMTL-e    0.9465 (0.01)    1322.2 (80.27)    2.16
Table 1 summarizes the performance of all the above algorithms on the three datasets. In addition to the AUC scores, we report the average total number of support vectors (nSV) and the CPU time taken for learning from one instance (Time). From the table, it is evident that OSMTL* outperforms all the baselines in terms of both AUC and nSV. This is expected for the two default baselines (PA-ITL and PA-ONE). We believe that PA-ONE shows better results than PA-ITL in spam because the former learns the global information (common spam emails) that is quite dominant in the spam detection problem. The update rule for FOML is similar to ours but uses fixed weights. The results justify our claim that making the weights adaptive leads to improved performance.
In addition to better results, our algorithm consumes less or comparable CPU time relative to the baselines which take into account inter-task relationships. Compared to the OMTRL algorithm, which recomputes the task covariance matrix every iteration using expensive SVD routines, the adaptive weights in our method are updated independently for each task. As specified in [9], we learn the task weight vectors for OMTRL separately, as K independent perceptrons, for the first half of the training data available (EPOCH = 0.5). OMTRL potentially loses half the data without learning the task-relationship matrix, as the latter depends on the quality of the task weight vectors.
It is evident from the table that algorithms which use loss-based update weights (OSGL, OSMTL*) considerably outperform the ones that do not (FOML, OMTRL). We believe that the loss incurred per instance gives us valuable information for the algorithm to learn from that instance, as well as to evaluate the inter-dependencies among tasks. That said, task-relationship information does help by learning from the related tasks' data, but we demonstrate that combining both the task relationships and the loss information gives us a better algorithm, as is evident from our experiments.
We would like to note that our proposed algorithm OSMTL* performs exceptionally well on sentiment, which has been used as a standard benchmark application for domain adaptation experiments in the existing literature [19]. We believe the advantageous results on the sentiment dataset imply that, even with relatively few examples, effective knowledge transfer among the tasks/domains can be achieved by adaptively choosing the (probabilistic) inter-task relationships from the data.
4 Conclusion
We proposed a novel online multi-task learning algorithm that jointly learns the per-task hypothesis
and the inter-task relationships. The key idea is based on smoothing the loss function of each task
w.r.t. a probabilistic distribution over all tasks, and adaptively refining such distribution over time. In
addition to closed-form updating rules, we show our method achieves the sub-linear regret bound.
Effectiveness of our algorithm is empirically verified over several benchmark datasets.
Acknowledgments
This work is supported in part by NSF under grants IIS-1216282 and IIS-1546329.
References
[1] Koby Crammer and Yishay Mansour. Learning multiple tasks using shared hypotheses. In Advances in Neural Information Processing Systems, pages 1475-1483, 2012.
[2] Andreas Argyriou, Theodoros Evgeniou, and Massimiliano Pontil. Convex multi-task feature learning. Machine Learning, 73(3):243-272, 2008.
[3] Ya Xue, Xuejun Liao, Lawrence Carin, and Balaji Krishnapuram. Multi-task learning for classification with Dirichlet process priors. The Journal of Machine Learning Research, 8:35-63, 2007.
[4] Yu Zhang and Dit-Yan Yeung. A regularization approach to learning task relationships in multitask learning. ACM Transactions on Knowledge Discovery from Data (TKDD), 8(3):12, 2014.
[5] Jacob Abernethy, Peter Bartlett, and Alexander Rakhlin. Multitask learning with expert advice. In Learning Theory, pages 484-498. Springer, 2007.
[6] Ofer Dekel, Philip M. Long, and Yoram Singer. Online learning of multiple tasks with a shared loss. Journal of Machine Learning Research, 8(10):2233-2264, 2007.
[7] Gábor Lugosi, Omiros Papaspiliopoulos, and Gilles Stoltz. Online multi-task learning with hard constraints. arXiv preprint arXiv:0902.3526, 2009.
[8] Giovanni Cavallanti, Nicolò Cesa-Bianchi, and Claudio Gentile. Linear algorithms for online multitask classification. The Journal of Machine Learning Research, 11:2901-2934, 2010.
[9] Avishek Saha, Piyush Rai, Suresh Venkatasubramanian, and Hal Daumé. Online learning of multiple tasks and their relationships. In International Conference on Artificial Intelligence and Statistics, pages 643-651, 2011.
[10] Alekh Agarwal, Alexander Rakhlin, and Peter Bartlett. Matrix regularization techniques for online multitask learning. EECS Department, University of California, Berkeley, Tech. Rep. UCB/EECS-2008-138, 2008.
[11] Meghana Kshirsagar, Jaime Carbonell, and Judith Klein-Seetharaman. Multisource transfer learning for host-pathogen protein interaction prediction in unlabeled tasks. In NIPS Workshop on Machine Learning for Computational Biology, 2013.
[12] Kilian Weinberger, Anirban Dasgupta, John Langford, Alex Smola, and Josh Attenberg. Feature hashing for large scale multitask learning. In Proceedings of the 26th Annual International Conference on Machine Learning, pages 1113-1120. ACM, 2009.
[13] Meghana Kshirsagar, Jaime Carbonell, and Judith Klein-Seetharaman. Multitask learning for host-pathogen protein interactions. Bioinformatics, 29(13):i217-i226, 2013.
[14] M. Pawan Kumar, Benjamin Packer, and Daphne Koller. Self-paced learning for latent variable models. In Advances in Neural Information Processing Systems, pages 1189-1197, 2010.
[15] Lu Jiang, Deyu Meng, Shoou-I Yu, Zhenzhong Lan, Shiguang Shan, and Alexander Hauptmann. Self-paced learning with diversity. In Advances in Neural Information Processing Systems, pages 2078-2086, 2014.
[16] A.-S. Nemirovsky, D.-B. Yudin, and E.-R. Dawson. Problem complexity and method efficiency in optimization. 1982.
[17] Shai Shalev-Shwartz and Yoram Singer. Online learning: Theory, algorithms, and applications. PhD Dissertation, 2007.
[18] Koby Crammer, Ofer Dekel, Joseph Keshet, Shai Shalev-Shwartz, and Yoram Singer. Online passive-aggressive algorithms. The Journal of Machine Learning Research, 7:551-585, 2006.
[19] John Blitzer, Mark Dredze, Fernando Pereira, et al. Biographies, bollywood, boom-boxes and blenders: Domain adaptation for sentiment classification. In ACL, volume 7, pages 440-447, 2007.
A Pseudo-Bayesian Algorithm for Robust PCA
Tae-Hyun Oh¹   Yasuyuki Matsushita²   In So Kweon¹   David Wipf³*
¹ Electrical Engineering, KAIST, Daejeon, South Korea
² Multimedia Engineering, Osaka University, Osaka, Japan
³ Microsoft Research, Beijing, China
thoh.kaist.ac.kr@gmail.com   yasumat@ist.osaka-u.ac.jp   iskweon@kaist.ac.kr   davidwip@microsoft.com
Abstract
Commonly used in many applications, robust PCA represents an algorithmic attempt to reduce the sensitivity of classical PCA to outliers. The basic idea is to learn
a decomposition of some data matrix of interest into low rank and sparse components, the latter representing unwanted outliers. Although the resulting problem is
typically NP-hard, convex relaxations provide a computationally-expedient alternative with theoretical support. However, in practical regimes performance guarantees
break down and a variety of non-convex alternatives, including Bayesian-inspired
models, have been proposed to boost estimation quality. Unfortunately though,
without additional a priori knowledge none of these methods can significantly
expand the critical operational range such that exact principal subspace recovery is
possible. Into this mix we propose a novel pseudo-Bayesian algorithm that explicitly compensates for design weaknesses in many existing non-convex approaches
leading to state-of-the-art performance with a sound analytical foundation.
1 Introduction
It is now well-established that principal component analysis (PCA) is quite sensitive to outliers, with even a single corrupted data element carrying the potential of grossly biasing the recovered principal subspace. This is particularly true in many relevant applications that rely heavily on low-dimensional representations [8, 13, 27, 33, 22]. Mathematically, such outliers can be described by the measurement model Y = Z + E, where Y ∈ R^{n×m} is an observed data matrix, Z = AB^⊤ is a low-rank component with principal subspace equal to span[A], and E is a matrix of unknown sparse corruptions with arbitrary amplitudes.
Ideally, we would like to remove the effects of E, which would then allow regular PCA to be applied to Z for obtaining principal components devoid of unwanted bias. For this purpose, robust PCA (RPCA) algorithms have recently been motivated by the optimization problem

min_{Z,E} max(n, m) · rank[Z] + ‖E‖₀  s.t.  Y = Z + E,    (1)

where ‖·‖₀ denotes the ℓ₀ matrix norm (meaning the number of nonzero matrix elements) and the max(n, m) multiplier ensures that both rank and sparsity terms scale between 0 and nm, reflecting a priori agnosticism about their relative contributions to Y. The basic idea is that if {Z^*, E^*} minimizes (1), then Z^* is likely to represent the original uncorrupted data.
As a point of reference, if we somehow knew a priori which elements of E were zero (i.e., no gross corruptions), then (1) could be effectively reduced to the much simpler matrix completion (MC) problem [5]

min_Z rank[Z]  s.t.  y_{ij} = z_{ij}, ∀(i, j) ∈ Ω,    (2)

where Ω denotes the set of indices corresponding with zero-valued elements in E. A major challenge with RPCA is that an accurate estimate of the support set Ω can be elusive.
* This work was done while the first author was an intern at Microsoft Research, Beijing. The first and third authors were supported by the NRF of Korea grant funded by the Korea government, MSIP (No. 2010-0028680). The second author was partly supported by JSPS KAKENHI Grant Number JP16H01732.
Unfortunately, solving (1) is non-convex, discontinuous, and NP-hard in general. Therefore, the convex surrogate referred to as principal component pursuit (PCP)

min_{Z,E} √(max(n, m)) · ‖Z‖_* + ‖E‖₁  s.t.  Y = Z + E    (3)

is often adopted, where ‖·‖_* denotes the nuclear norm and ‖·‖₁ is the ℓ₁ matrix norm. These represent the tightest convex relaxations of the rank and ℓ₀ norm functions respectively. Several theoretical results quantify technical conditions whereby the solutions of (1) and (3) are actually equivalent [4, 6]. However, these conditions are highly restrictive and do not provably hold in practical situations of interest such as face clustering [10], motion segmentation [10], high dynamic range imaging [22] or background subtraction [4]. Moreover, both the nuclear and ℓ₁ norms are sensitive to data variances, often over-shrinking large singular values of Z or coefficients in E [11].
All of this motivates stronger approaches to approximating (1). In Section 2 we review existing
alternatives, including both non-convex and probabilistic approaches; however, we argue that none
of these can significantly outperform PCP in terms of principal subspace recovery in important,
representative experimental settings devoid of prior knowledge (e.g., true signal distributions, outlier
locations, rank, etc.). We then derive a new pseudo-Bayesian algorithm in Section 3 that has been
tailored to conform with principled overarching design criteria. By ?pseudo?, we mean an algorithm
inspired by Bayesian modeling conventions, but with special modifications that deviate from the
original probabilistic script for reasons related to estimation quality and computational efficiency.
Next, Section 4 examines relevant theoretical properties, explicitly accounting for all approximations
involved, while Section 5 provides empirical validations. Proofs and other technical details are
deferred to [23]. Our high-level contributions can be summarized as follows:
- We derive a new pseudo-Bayesian RPCA algorithm with efficient ADMM subroutine.
- While provable recovery guarantees are absent for non-convex RPCA algorithms, we nonetheless
quantify how our pseudo-Bayesian design choices lead to a desirable energy landscape. In particular,
we show that although any outlier support pattern will represent an inescapable local minimum of (1) (or of a broad class of functions that mimic (1)), our proposal can simultaneously retain the correct
global optimum while eradicating at least some of the suboptimal minima associated with incorrect
outlier location estimates.
- We empirically demonstrate improved performance over state-of-the-art algorithms (including
PCP) in terms of standard phase transition plots with a dramatically expanded success region. Quite
surprisingly, our algorithm can even outperform convex matrix completion (MC) despite the fact
that the latter is provided with perfect knowledge of which entries are not corrupted, suggesting
that robust outlier support pattern estimation is indeed directly facilitated by our model.
2 Recent Work
The vast majority of algorithms for solving (1) either implicitly or explicitly attempt to solve a problem of the form

min_{Z,E} f₁(Z) + Σ_{i,j} f₂(e_{ij})  s.t.  Y = Z + E,    (4)

where f₁ and f₂ are penalty functions that favor minimal rank and sparsity respectively. When f₁ is the nuclear norm (scaled appropriately) and f₂(e) = |e|, then (4) reduces to (3). Methods differ however by replacing f₁ and f₂ with non-convex alternatives, such as generalized Huber functions [7] or Schatten ℓ_p quasi-norms with p < 1 [18, 19]. When applied to the singular values of Z and elements of E respectively, these selections enact stronger enforcement of minimal rank and sparsity. If prior knowledge of the true rank of Z is available, a truncated nuclear norm approach (TNN-RPCA) has also been proposed [24]. Further divergences follow from the spectrum of optimization schemes applied to different objectives, such as the alternating directions method of multipliers (ADMM) algorithm [3] or iteratively reweighted least squares (IRLS) [18].
With all of these methods, we may consider relaxing the strict equality constraint to the regularized form

min_{Z,E} (1/(2λ)) ‖Y − Z − E‖²_F + f₁(Z) + Σ_{i,j} f₂(e_{ij}),    (5)

where λ > 0 is a trade-off parameter. This has inspired a number of competing Bayesian formulations, which typically proceed as follows. Let

p(Y|Z, E) ∝ exp( −(1/(2λ)) ‖Y − Z − E‖²_F )    (6)
define a likelihood function, where λ represents a non-negative variance parameter assumed to be known.² Hierarchical prior distributions are then assigned to Z and E to encourage minimal rank and strong sparsity, respectively. For the latter, the most common choice is the Gaussian scale-mixture (GSM) defined hierarchically by

p(E|Γ) = Π_{i,j} p(e_{ij}|γ_{ij}),  p(e_{ij}|γ_{ij}) ∝ exp( −e²_{ij}/(2γ_{ij}) ),  with hyperprior  p(γ⁻¹_{ij}) ∝ γ^{1−a}_{ij} exp( −b/γ_{ij} ),    (7)

where Γ is a matrix of non-negative variances and a, b ≥ 0 are fixed parameters. Note that when these values are small, the resulting distribution over each e_{ij} (obtained by marginalizing over the respective γ_{ij}) is heavy-tailed with a sharp peak at zero, the defining characteristics of sparse priors.
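The heavy-tailed behavior of the GSM marginal is easy to verify numerically. The sketch below samples γ⁻¹ from a Gamma(a, b) hyperprior (one literal reading of (7)), draws e | γ ~ N(0, γ), and compares the excess kurtosis against a variance-matched Gaussian; the particular values a = 2.5, b = 1 are our own choices, for which the marginal is a Student-t with finite kurtosis.

    import numpy as np

    rng = np.random.default_rng(0)
    a, b, n = 2.5, 1.0, 500_000

    inv_gamma = rng.gamma(shape=a, scale=1.0 / b, size=n)  # gamma_ij^{-1} ~ Gamma(a, b)
    e = rng.normal(0.0, np.sqrt(1.0 / inv_gamma))          # e | gamma ~ N(0, gamma): a GSM
    g = rng.normal(0.0, e.std(), size=n)                   # Gaussian with matched variance

    def excess_kurtosis(x):
        z = (x - x.mean()) / x.std()
        return np.mean(z**4) - 3.0

    print(excess_kurtosis(e))  # ~6 (Student-t with 5 dof): sharp peak, heavy tails
    print(excess_kurtosis(g))  # ~0 for the Gaussian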
For the prior on Z, Bayesian methods have somewhat broader distinctions. In particular, a number of methods explicitly assume that Z = AB^⊤ and specify GSM priors on A and B [1, 9, 15, 30]. For example, variational Bayesian RPCA (VB-RPCA) [1] assumes p(A|ψ) ∝ exp( −tr[ A diag[ψ]⁻¹ A^⊤ ] ), where ψ is a non-negative variance vector. An equivalent prior is used for p(B|ψ) with a shared value of ψ. This model also applies the prior p(ψ) = Π_i p(ψ_i), with p(ψ_i) defined for consistency with p(γ⁻¹_{ij}) in (7). Low-rank solutions are favored via the same mechanism as described above for sparsity, but only the sparse variance prior is applied to columns of A and B, effectively pruning them from the model if the associated ψ_i is small. Given the above, the joint distribution is

p(Y, A, B, E, Γ, ψ) = p(Y|A, B, E) p(E|Γ) p(A|ψ) p(B|ψ) p(Γ) p(ψ).    (8)
Full Bayesian inference with this is intractable, hence a common variational Bayesian (VB) mean-field approximation is applied [1, 2]. The basic idea is to obtain a tractable approximate factorial posterior distribution by solving

min_{q(·)} KL[ q(·) ‖ p(A, B, E, Γ, ψ | Y) ],    (9)

where q(·) ≜ q(A) q(B) q(E) q(Γ) q(ψ), each q represents an arbitrary probability distribution, and KL[·‖·] denotes the Kullback-Leibler divergence between two distributions. This can be accomplished via coordinate descent minimization over each respective q distribution while holding the others fixed. Final estimates of Z and E are obtained by the means of q(A), q(B), and q(E) upon convergence. A related hierarchical model is used in [9, 30], but MCMC sampling techniques are used for full Bayesian inference RPCA (FB-RPCA) at the expense of considerable computational complexity and multiple tuning parameters.
An alternative empirical Bayesian algorithm (EB-RPCA) is described in [31]. In addition to the likelihood function (6) and prior from (7), this method assumes a direct Gaussian prior on Z given by

p(Z|Ψ) ∝ exp( −½ tr[ Z^⊤ Ψ⁻¹ Z ] ),    (10)

where Ψ is a symmetric and positive definite matrix.³ Inference is accomplished via an empirical Bayesian approach [20]. The basic idea is to marginalize out the unknown Z and E and solve

max_{Ψ,Γ} ∫∫ p(Y|Z, E) p(Z|Ψ) p(E|Γ) dZ dE    (11)

using an EM-like algorithm. Once we have an optimal {Ψ^*, Γ^*}, we then compute the posterior mean of p(Z, E|Y, Ψ^*, Γ^*), which is available in closed form.
Finally, a recent class of methods has been derived around the concept of approximate message
passing, AMP-RPCA [26], which applies Gaussian priors to the factors A and B and infers posterior
estimates by loopy belief propagation [21]. In our experiments (see [23]) we found AMP-RPCA to
be quite sensitive to data deviating from these distributions.
3 A New Pseudo-Bayesian Algorithm
As it turns out, it is quite difficult to derive a fully Bayesian model, or some tight variational/empirical
approximation, that leads to an efficient algorithm capable of consistently outperforming the original
convex PCP, at least in the absence of additional, exploitable prior knowledge. It is here that we adopt
² Actually many methods attempt to learn this parameter from data, but we avoid this consideration for simplicity. As well, for subtle reasons such learning is sometimes not even identifiable in the strict statistical sense.
³ Note that in [31] this method is motivated from an entirely different variational perspective anchored in convex analysis; however, the cost function that ultimately emerges is equivalent to what follows with these priors.
a pseudo-Bayesian approach, by which we mean that a Bayesian-inspired cost function will be altered
using manipulations that, although not consistent with any original Bayesian model, nonetheless
produce desirable attributes relevant to blindly solving (1). In some sense however, we view this as a
strength, because the final model analysis presented later in Section 4 does not rely on any presumed
validity of the underlying prior assumptions, but rather on explicit properties of the objective that
emerges, including all assumptions and approximation involved.
Basic Model: We begin with the same likelihood function from (6), noting that in the limit as λ → 0 this will enforce the constraint set from (1). We also adopt the same prior on E given by (7) above and used in [1] and [31], but we need not assume any additional hyperprior on Γ. In contrast, for the prior on Z our method diverges, and we define the Gaussian

p(Z|Ψ_r, Ψ_c) ∝ exp( −½ z⃗^⊤ (Ψ_r ⊗ I + I ⊗ Ψ_c)⁻¹ z⃗ ),    (12)

where z⃗ ≜ vec[Z] is the column-wise vectorization of Z, ⊗ denotes the Kronecker product, and Ψ_c ∈ R^{n×n} and Ψ_r ∈ R^{m×m} are positive semi-definite, symmetric matrices.⁴ Here Ψ_c can be viewed as applying a column-wise covariance factor, and Ψ_r a row-wise one. Note that if Ψ_r = 0, then this prior collapses to (10); however, by including Ψ_r we can retain symmetry in our model, or invariance to inference using either Y or Y^⊤. Related priors can also be used to improve the performance of affine rank minimization problems [34].
We apply the empirical Bayesian procedure from (11); the resulting convolution-of-Gaussians integral [2] can be computed in closed form. After applying a −2 log[·] transformation, this is equivalent to minimizing

L(Ψ_r, Ψ_c, Γ) = y⃗^⊤ Σ_y⁻¹ y⃗ + log |Σ_y|,  where  Σ_y ≜ Ψ_r ⊗ I + I ⊗ Ψ_c + Γ̄ + λI,    (13)

and Γ̄ ≜ diag[γ⃗]. Note that for even reasonably sized problems Σ_y ∈ R^{nm×nm} will be huge, and consequently we will require certain approximations to produce affordable update rules. Fortunately this can be accomplished while simultaneously retaining a principled objective function capable of outperforming existing methods.
Pseudo-Bayesian Objective: We first modify (13) to give

L(Ψ_r, Ψ_c, Γ) = y⃗^⊤ Σ_y⁻¹ y⃗ + Σ_j log |Ψ_c + ½ Γ̄_{·j} + (λ/2) I| + Σ_i log |Ψ_r + ½ Γ̄_{i·} + (λ/2) I|,    (14)

where Γ̄_{·j} ≜ diag[γ_{·j}] and γ_{·j} represents the j-th column of Γ. Similarly we define Γ̄_{i·} ≜ diag[γ_{i·}], with γ_{i·} the i-th row of Γ. This new cost is nothing more than (13) but with the log |·| term split in half producing a lower bound by Jensen's inequality; the Kronecker product can naturally be dissolved under these conditions. Additionally, (14) represents a departure from our original Bayesian model in that there is no longer any direct empirical Bayesian or VB formulation that would lead to (14). Note that although this modification cannot be justified on strictly probabilistic terms, we will see shortly that it nonetheless still represents a viable cost function in the abstract sense, and lends itself to increased computational efficiency. The latter is an immediate effect of the drastically reduced dimensionality of the matrices inside the determinant. Henceforth (14) will represent the cost function that we seek to minimize; relevant properties will be handled in Section 4. We emphasize that all subsequent analysis is based directly upon (14), and therefore already accounts for the approximation step in advancing from (13). This is unlike other Bayesian model justifications relying on the legitimacy of the original full model, which then adopt various approximations that may completely change the problem.
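The dimensionality reduction is easy to see numerically. In the sketch below (sizes and random inputs are arbitrary choices of ours), the log-determinant in (13) requires assembling the full nm × nm matrix Σ_y, while the split terms in (14) only touch m determinants of size n and n of size m; we print both quantities without asserting any exact relation beyond the Jensen bound discussed above.

    import numpy as np

    rng = np.random.default_rng(1)
    n, m, lam = 30, 40, 0.1

    def rand_psd(d):
        A = rng.standard_normal((d, d))
        return A @ A.T / d

    Psi_c, Psi_r = rand_psd(n), rand_psd(m)
    Gamma = rng.random((n, m)) + 0.1        # gamma_ij > 0

    # log|Sigma_y| from (13): one (nm x nm) determinant.
    Sigma_y = (np.kron(Psi_r, np.eye(n)) + np.kron(np.eye(m), Psi_c)
               + np.diag(Gamma.flatten(order="F")) + lam * np.eye(n * m))
    print(np.linalg.slogdet(Sigma_y)[1])

    # Split terms from (14): m determinants of size n plus n of size m.
    cols = sum(np.linalg.slogdet(Psi_c + 0.5 * np.diag(Gamma[:, j])
                                 + 0.5 * lam * np.eye(n))[1] for j in range(m))
    rows = sum(np.linalg.slogdet(Psi_r + 0.5 * np.diag(Gamma[i, :])
                                 + 0.5 * lam * np.eye(m))[1] for i in range(n))
    print(cols + rows)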
Update Rules: Common to many empirical Bayesian and VB approaches, our basic optimization strategy involves iteratively optimizing upper bounds on (14) in the spirit of majorization-minimization [12]. At a high level, our goal will be to apply bounds which separate Ψ_c, Ψ_r, and Γ into terms of the general form log |X| + tr[AX⁻¹], the reason being that this expression has a simple global minimum over X given by X = A. Therefore the strategy will be to update the bound (parameterized by some matrix A), and then update the parameters of interest X.

Using standard conjugate duality relationships and variational bounding techniques [14][Chapter 4], it follows after some linear algebra that

⁴ Technically the Kronecker sum Ψ_r ⊗ I + I ⊗ Ψ_c must be positive definite for the inverse in (12) to be defined. However, we can accommodate the semi-definite case using the following convention. Without loss of generality assume that Ψ_r ⊗ I + I ⊗ Ψ_c = RR^⊤ for some matrix R. We then qualify that p(Z|Ψ_r, Ψ_c) = 0 if z⃗ ∉ span[R], and p(Z|Ψ_r, Ψ_c) ∝ exp[ −½ z⃗^⊤ (R^⊤)† R† z⃗ ] otherwise.
y⃗^⊤ Σ_y⁻¹ y⃗ ≤ (1/λ) ‖Y − Z − E‖²_F + Σ_{i,j} e²_{ij}/γ_{ij} + z⃗^⊤ (Ψ_r ⊗ I + I ⊗ Ψ_c)⁻¹ z⃗    (15)

for all Z and E. For fixed values of Ψ_r, Ψ_c, and Γ we optimize this quadratic bound to obtain revised estimates for Z and E, noting that exact equality in (15) is possible via the closed-form solution

z⃗ = (Ψ_r ⊗ I + I ⊗ Ψ_c) Σ_y⁻¹ y⃗,    e⃗ = Γ̄ Σ_y⁻¹ y⃗.    (16)

In large practical problems, (16) may become expensive to compute directly because of the high-dimensional inverse involved. However, we may still find the optimum efficiently by an ADMM procedure described in [23].
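For small problems, (16) can be evaluated directly by materializing the Kronecker structure; the following sketch does exactly that (the ADMM routine referenced above is what makes this practical at scale, and is not reproduced here).

    import numpy as np

    def update_Z_E(Y, Psi_r, Psi_c, Gamma, lam):
        # Closed-form minimizers (16) of the quadratic bound (15); small n, m only.
        n, m = Y.shape
        Gbar = np.diag(Gamma.flatten(order="F"))                  # diag[vec(Gamma)]
        K = np.kron(Psi_r, np.eye(n)) + np.kron(np.eye(m), Psi_c)
        Sigma_y = K + Gbar + lam * np.eye(n * m)
        t = np.linalg.solve(Sigma_y, Y.flatten(order="F"))        # Sigma_y^{-1} vec(Y)
        Z = (K @ t).reshape((n, m), order="F")
        E = (Gbar @ t).reshape((n, m), order="F")
        return Z, E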
We can also further bound the right-hand side of (15) using Jensen's inequality as

z⃗^⊤ (Ψ_r ⊗ I + I ⊗ Ψ_c)⁻¹ z⃗ ≤ tr[ Z^⊤ Z Ψ_r⁻¹ + Z Z^⊤ Ψ_c⁻¹ ].    (17)

Along with (15) this implies that for fixed values of Z and E we can obtain an upper bound which only depends on Ψ_r, Ψ_c, and Γ in a decoupled or separable fashion.
For the log |·| terms in (14), we also derive convenient upper bounds using determinant identities and a first-order approximation, the goal being to find a representation that plays well with the previous decoupled bound for optimization purposes. Again using conjugate duality relationships, we can form the bound

log |Ψ_c + ½ Γ̄_{·j} + (λ/2) I| ≤ log |Ψ_c| + log |Γ̄_{·j}| + log |W(Ψ_c, Γ̄_{·j})|
    ≤ log |Ψ_c| + log |Γ̄_{·j}| + tr[ (∇^j_{Ψ_c⁻¹})^⊤ Ψ_c⁻¹ + (∇^j_{Γ̄_{·j}⁻¹})^⊤ Γ̄_{·j}⁻¹ ] + C,    (18)

where the inverse Γ̄_{·j}⁻¹ is understood to apply element-wise, and W(Ψ_c, Γ̄_{·j}) is defined as

W(Ψ_c, Γ̄_{·j}) ≜ (1/(2λ)) [ −2I  2I ; 2I  I ] + [ Ψ_c⁻¹  0 ; 0  Γ̄_{·j}⁻¹ ].    (19)

Additionally, C is a standard constant, which accompanies the first-order approximation to guarantee that the upper bound is tangent to the underlying cost function; however, its exact value is irrelevant for optimization purposes. Finally, the requisite gradients are defined as

∇^j_{Γ̄_{·j}⁻¹} ≜ ∂ log |W(Ψ_c, Γ̄_{·j})| / ∂Γ̄_{·j}⁻¹ = diag[ Γ̄_{·j} − ½ Γ̄_{·j} (S^j_c)⁻¹ Γ̄_{·j} ],
∇^j_{Ψ_c⁻¹} ≜ ∂ log |W(Ψ_c, Γ̄_{·j})| / ∂Ψ_c⁻¹ = Ψ_c − Ψ_c (S^j_c)⁻¹ Ψ_c,    (20)

where S^j_c ≜ Ψ_c + ½ Γ̄_{·j} + (λ/2) I. Analogous bounds can be derived for the log |Ψ_r + ½ Γ̄_{i·} + (λ/2) I| terms in (14).
These bounds are principally useful because all Ψ_c, Ψ_r, Γ̄_{·j}, and Γ̄_{i·} factors have been decoupled. Consequently, with Z, E, and all the relevant gradients fixed, we can separately combine Ψ_c-, Ψ_r-, and Γ-dependent terms from the bounds and then optimize independently. For example, combining terms from (17) and (18) involving Ψ_c for all j, this requires solving

min_{Ψ_c} m log |Ψ_c| + tr[ ( Σ_j (∇^j_{Ψ_c⁻¹})^⊤ + Z Z^⊤ ) Ψ_c⁻¹ ].    (21)

Analogous cost functions emerge for Ψ_r and Γ. All three problems have closed-form optimal
solutions given by

Ψ_c = (1/m) [ Σ_j (∇^j_{Ψ_c⁻¹})^⊤ + Z Z^⊤ ],    Ψ_r = (1/n) [ Σ_i (∇^i_{Ψ_r⁻¹})^⊤ + Z^⊤ Z ],    γ⃗ = z⃗² + u⃗_c + u⃗_r,    (22)

where the squaring operator is applied element-wise to z⃗, u⃗_c ≜ [∇^1_{Γ̄_{·1}⁻¹}; . . . ; ∇^m_{Γ̄_{·m}⁻¹}], and analogously for u⃗_r. One interesting aspect of (22) is that it forces Ψ_c ⪰ (1/m) Z Z^⊤ and Ψ_r ⪰ (1/n) Z^⊤ Z, thus maintaining a balancing symmetry and preventing one or the other from possibly converging towards zero. This is another desirable consequence of using the bound in (17). To finalize then, the proposed pipeline, which we henceforth refer to as pseudo-Bayesian RPCA (PB-RPCA), involves the steps shown under Algorithm 1 in [23]. These can be implemented in such a way that the complexity is linear in max(n, m) and cubic in min(n, m).
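Putting the pieces together, the loop below is a direct small-scale transcription of one possible PB-RPCA iteration, reusing the update_Z_E sketch given after (16): it alternates the (Z, E) update with the gradient terms (20) and the closed-form refinements (22). The identity initializations and fixed iteration count are our own choices, and the γ update follows (22) as printed above; the paper's efficient implementation is Algorithm 1 in [23].

    import numpy as np

    def pb_rpca(Y, lam=1e-2, iters=50):
        n, m = Y.shape
        Psi_c, Psi_r = np.eye(n), np.eye(m)
        Gamma = np.ones((n, m))
        for _ in range(iters):
            Z, E = update_Z_E(Y, Psi_r, Psi_c, Gamma, lam)   # eq. (16)
            Grad_c, Uc = np.zeros((n, n)), np.empty((n, m))
            for j in range(m):                                # column-side terms of (20)
                S = Psi_c + 0.5 * np.diag(Gamma[:, j]) + 0.5 * lam * np.eye(n)
                Sinv = np.linalg.inv(S)
                Grad_c += Psi_c - Psi_c @ Sinv @ Psi_c
                Uc[:, j] = Gamma[:, j] - 0.5 * Gamma[:, j]**2 * np.diag(Sinv)
            Grad_r, Ur = np.zeros((m, m)), np.empty((n, m))
            for i in range(n):                                # row-side analogues
                S = Psi_r + 0.5 * np.diag(Gamma[i, :]) + 0.5 * lam * np.eye(m)
                Sinv = np.linalg.inv(S)
                Grad_r += Psi_r - Psi_r @ Sinv @ Psi_r
                Ur[i, :] = Gamma[i, :] - 0.5 * Gamma[i, :]**2 * np.diag(Sinv)
            Psi_c = (Grad_c + Z @ Z.T) / m                    # eq. (22)
            Psi_r = (Grad_r + Z.T @ Z) / n
            Gamma = Z**2 + Uc + Ur                            # gamma update as printed in (22)
        return Z, E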
4 Analysis of the PB-RPCA Objective
On the surface it may appear that the PB-RPCA objective (14) represents a rather circuitous route to solving (1), with no obvious advantage over the convex PCP relaxation from (3), or any other approach for that matter. However quite surprisingly, we prove in [23] that by simply replacing the log |·| matrix operators in (14) with tr[·], the resulting function collapses exactly to convex PCP. So what at first appear as distant cousins are actually quite closely related objectives. Of course our work is still in front of us to explain why log |·|, and therefore the PB-RPCA objective by association, might display any particular advantage. This leads us to considerations of relative concavity, non-separability, and symmetry as described below in turn.
Relative Concavity: Although both log |·| and tr[·] are concave non-decreasing functions of the singular values of symmetric positive definite matrices, and hence favor both sparsity of Γ and minimal rank of Ψ_r or Ψ_c, the former is far more strongly concave (in the sense of relative concavity described in [25]). In this respect we may expect that log |·| is less likely to over-shrink large values [11]. Moreover, applying a concave non-decreasing penalty to elements of Γ favors a sparse estimate, which in turn transfers this sparsity directly to E by virtue of the left multiplication by Γ̄ in (16). Likewise for the singular values of Ψ_c and Ψ_r.
Non-Separability: While potentially desirable, the relative concavity distinction described above is certainly not sufficient to motivate why PB-RPCA might represent an effective RPCA approach, especially given the breadth of non-convex alternatives already in the literature. However, a much stronger argument can be made by exposing a fundamental limitation of all RPCA methods (convex or otherwise) that rely on minimization of generic penalties in the separable or additive form of (4). For this purpose, let Ω denote a set of indices that correspond with zero-valued elements in E, such that E_Ω = 0 while all other elements of E are arbitrary nonzeros (it can equally be viewed as the complement of the support of E). In the case of MC, Ω would also represent the set of observed matrix elements. We then have the following:
Proposition 1. To guarantee that (4) has the same global optimum as (1) for all Y where a unique solution exists, it follows that f₁ and f₂ must be non-convex and no feasible descent direction can ever remove an index from or decrease the cardinality of Ω.
In [31] it has been shown that, under similar conditions, the gradient in a feasible direction at any
zero-valued element of E must be infinite to guarantee a matching global optimum, from which
this result naturally follows. The ramifications of this proposition are profound if we ever wish to
produce a version of RPCA that can mimic the desirable behavior of much simpler MC problems
with known support, or at least radically improve upon PCP with unknown outlier support. In words,
Proposition 1 implies that under the stated global-optimality preserving conditions, if any element of
E converges to zero during optimization with an arbitrary descent algorithm, it will remain anchored
at zero until the end. Consequently, if the algorithm prematurely errs in setting the wrong element
to zero, meaning the wrong support pattern has been inferred at any time during an optimization
trajectory, it is impossible to ever recover, a problem naturally side-stepped by MC where the support
is effectively known. Therefore, the adoption of separable penalty functions can be quite constraining
and they are unlikely to produce sufficiently reliable support recovery.
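A one-line computation illustrates the mechanism behind this trap for the separable case. Take f₂(e) = |e|^p with p < 1, a representative non-convex sparsity penalty whose gradient at zero is infinite (the condition noted above): the difference quotient below blows up as the step size shrinks, so the penalty dominates any smooth data term and a coordinate that reaches zero can never re-enter the support.

    p = 0.5
    for t in [1e-2, 1e-4, 1e-8]:
        # (f2(t) - f2(0)) / t = t^(p - 1): the directional slope away from zero.
        print(t, abs(t)**p / t)   # 10, 100, 10000: unbounded as t -> 0+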
But how does this relate to PB-RPCA? Our algorithm maintains a decidedly non-separable penalty function on Ψ_c, Ψ_r, and Γ, which directly transfers to an implicit, non-separable regularizer over Z and E when viewed through the dual-space framework from [32].⁵ By this we mean a penalty f(Z, E) ≠ f₁(Z) + f₂(E) for any functions f₁ and f₂, and with Z fixed, we have f(Z, E) ≠ Σ_{i,j} f_{ij}(e_{ij}) for any set of functions {f_{ij}}.
We now examine the consequences. Let Ω now denote a set of indices that correspond with zero-valued elements in Γ, which translates into an equivalent support set for E via (16). This then leads to quantifiable benefits:

Proposition 2. The following properties hold w.r.t. the PB-RPCA objective (assuming n = m for simplicity):

• Assume that a unique global solution to (1) exists such that either rank[Z] + max_j ‖e_{·j}‖₀ < n or rank[Z] + max_i ‖e_{i·}‖₀ < n. Additionally, let {Ψ_c^*, Ψ_r^*, Γ^*} denote a globally minimizing solution to (14) and {Z^*, E^*} the corresponding values of Z and E computed using (16). Then in the limit λ → 0, Z^* and E^* globally minimize (1).

⁵ Even though this penalty function is not available in closed form, non-separability is nonetheless enforced via the linkage between Ψ_c, Ψ_r, and Γ in the log |·| operator.
[Figure 1: eight phase-transition panels over rank ratio (x-axis) and outlier ratio (y-axis): (a) CVX-PCP, (b) IRLS-RPCA, (c) VB-RPCA, (d) PB-RPCA w/o sym., (e) CVX-MC [known outlier location], (f) TNN-RPCA [known rank], (g) FB-RPCA, (h) PB-RPCA (Proposed).]

Figure 1: Phase transition over outlier (y-axis) and rank (x-axis) ratio variations. Here CVX-MC and TNN-RPCA maintain advantages of exactly known outlier support pattern and true rank respectively.
• Assume that Y has no entries identically equal to zero.⁶ Then for any arbitrary Ω, there will always exist a range of Ψ_c and Ψ_r values such that for any Γ consistent with Ω we are not at a locally minimizing solution to (14), meaning there exists a feasible descent direction whereby elements of Γ can escape from zero.
A couple important comments are worth stating regarding this result. First, the rank and row/column-sparsity requirements are extremely mild. In fact, any minimum of (1) will be such that rank[Z] + max_j ‖e_{·j}‖₀ ≤ n and rank[Z] + max_i ‖e_{i·}‖₀ ≤ m, regardless of Y. Secondly, unlike any separable penalty function (4) that retains the same correct global optimum as (1), Proposition 2 implies that (14) need not be locally minimized by every possible support pattern for outlier locations. Consequently, premature convergence to suboptimal supports need not disrupt trajectories towards the global solution to the extent that (4) may be obstructed. Moreover, beyond algorithms that explicitly adopt separable penalties (the vast majority), some existing Bayesian approaches may implicitly default to (4). For example, as shown in [23], the mean-field factorizations adopted by VB-RPCA actually allow the underlying free energy objective to be expressible as (4) for some f₁ and f₂.
Symmetry: Without the introduction of symmetry via our pseudo-Bayesian proposal (meaning either Ψ_c or Ψ_r is forced to zero), PB-RPCA collapses to something like EB-RPCA, which depends heavily on whether Y or Y^⊤ is provided as input and penalizes column- and row-spaces asymmetrically. In this regime it can be shown that the analogous requirement to replicate Proposition 2 becomes more stringent, namely we must assume the asymmetric condition rank[Z] + max_j ‖e_{·j}‖₀ < n. Thus the symmetric cost of PB-RPCA allows us to relax this column-wise restriction provided a row-wise alternative holds (and vice versa), allowing the PB-RPCA objective (14) to match the global optimum of our original problem from (1) under broader conditions.
In closing this section, we reiterate that all of our analysis and conclusions are based on (14), after
the stated approximations. Therefore we need not rely on the plausibility of the original Bayesian
starting point from Section 3 nor the tightness of subsequent approximations for justification; rather
(14) can be viewed as a principled stand-alone objective for RPCA regardless of its origins. Moreover,
it represents the first approach satisfying the relative concavity, non-separability, and symmetry
properties described above, which can loosely be viewed as necessary, but not sufficient design
criteria for an optimal RPCA objective.
5 Experiments
To examine significant factors that influence the ability to solve (1), we first evaluate the relative
performance of PB-RPCA estimating random simulated subspaces from corrupted measurements,
the standard benchmark. Later we present subspace clustering results for motion segmentation as a
practical application. Additional experiments and a photometric stereo example are provided in [23].
Phase Transition Graphs: We compare our method against existing RPCA methods: PCP [16], TNN [24], IRLS [18], VB [1], and FB [9]. We also include results using PB-RPCA but with symmetry removed (which then defaults to something like EB-RPCA), allowing us to isolate the importance of this factor; this variant is labeled "PB-RPCA w/o sym." For competing algorithms, we set parameters based on the values suggested by the original authors, with the exception of IRLS. Detailed settings and parameters can be found in [23].
⁶ This assumption can be relaxed with some additional effort but we avoid such considerations here for clarity of presentation.
[Figure 2: success rate vs. outlier ratio, comparing PB-RPCA (easy case), PB-RPCA (hard case), PCP (easy case), and PCP (hard case).]

Figure 2: Hard case comparison.

Figure 3: Motion segmentation errors on Hopkins155. Values are percentages (mean / median).

    Outlier ratio    SSC            Robust SSC    PCP+SSC      PB+SSC (Ours)
    Without sub-sampling (large number of measurements)
    0.1              19.0 / 14.9    5.3 / 0.3     3.0 / 0.0    2.4 / 0.0
    0.2              28.2 / 28.3    6.4 / 0.4     3.0 / 0.0    2.4 / 0.0
    0.3              33.2 / 34.7    7.2 / 0.5     3.6 / 0.2    2.8 / 0.0
    0.4              36.5 / 39.0    8.5 / 0.6     4.7 / 0.2    3.1 / 0.0
    With sub-sampling (small number of measurements)
    0.1              19.5 / 17.2    4.0 / 0.0     2.9 / 0.0    2.8 / 0.0
    0.2              33.0 / 33.3    5.3 / 0.0     3.7 / 0.0    3.6 / 0.0
    0.3              39.3 / 41.1    5.7 / 1.7     5.0 / 0.7    3.9 / 0.0
    0.4              42.2 / 43.5    6.4 / 2.1     9.8 / 5.1    3.7 / 0.0
We construct phase transition plots as in [4, 9] that evaluate the recovery success of every pairing of outlier ratio and rank using data Y = Z_GT + E_GT, where Y ∈ R^{m×n} and m = n = 200. The ground-truth outlier matrix E_GT is generated by selecting non-zero entries uniformly with probability ρ ∈ [0, 1], and its magnitudes are sampled iid from the uniform distribution U[−20, 20]. We generate the ground-truth low-rank matrix by Z_GT = AB^⊤, where A ∈ R^{n×r} and B ∈ R^{m×r} are drawn from iid N(0, 1). Figure 1 shows comparisons among competing methods, as well as convex nuclear-norm-based matrix completion (CVX-MC) [5], the latter representing a far easier estimation task given that missing entry locations (analogous to corruptions) occur in known locations. The color of each cell encodes the percentage of successful trials (out of 10 total) in which the normalized root-mean-squared error (NRMSE, ‖Ẑ − Z_GT‖_F / ‖Z_GT‖_F) recovering Z_GT is less than 0.001, the success criterion used in [4, 9].
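For reference, one trial of this synthetic benchmark can be generated as follows (a sketch; the solver call itself is whichever RPCA method is being evaluated).

    import numpy as np

    def make_trial(n=200, m=200, r=20, rho=0.1, seed=0):
        # Y = Z_GT + E_GT with Gaussian rank-r factors and uniform U[-20, 20] outliers.
        rng = np.random.default_rng(seed)
        Z_gt = rng.standard_normal((n, r)) @ rng.standard_normal((m, r)).T
        mask = rng.random((n, m)) < rho
        E_gt = np.where(mask, rng.uniform(-20, 20, size=(n, m)), 0.0)
        return Z_gt + E_gt, Z_gt, E_gt

    def nrmse(Z_hat, Z_gt):
        return np.linalg.norm(Z_hat - Z_gt) / np.linalg.norm(Z_gt)

    Y, Z_gt, E_gt = make_trial()
    # a trial counts as a success when nrmse(Z_hat, Z_gt) < 1e-3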
Notably PB-RPCA displays a much broader recoverability region. This improvement is even
maintained over TNN-RPCA and MC which require prior knowledge such as the true rank and
exact outlier locations respectively. These forms of prior knowledge offer a substantial advantage,
although in practical situations are usually unavailable. PB-RPCA also outperforms PB-RPCA w/o
sym. (its closest relative) by a wide margin, suggesting that the symmetry plays an important role.
The poor performance of FB-RPCA is explained in [23].
Hard Case Comparison: Recovery of Gaussian iid low-rank components (the typical benchmark recovery problem in the literature) is somewhat ideal for existing algorithms like PCP because the singular vectors of Z_GT will not resemble unit vectors that could be mistaken for sparse components. However, a simple test reveals just how brittle PCP is to deviations from the theoretically optimal regime. We generate a rank-one Z_GT = σ a³(b³)^⊤, where the cube operation is applied element-wise, a and b are vectors drawn iid from a unit sphere, and σ scales Z_GT to unit variance. E_GT has nonzero elements drawn iid from U[−1, 1]. Figure 2 shows the recovery results as the outlier ratio is increased. The hard case refers to the data just described, while the easy case follows the model used to make the phase transition plots. While PB-RPCA is quite stable, PCP completely fails on the hard data.
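The hard-case generator is equally short; everything here mirrors the description above, with the matrix size and outlier ratio as free parameters of our choosing.

    import numpy as np

    def hard_case(n=200, rho=0.2, seed=0):
        # Rank-one Z_GT with spiky singular vectors: element-wise cubes of
        # unit-sphere vectors, rescaled to unit variance, plus U[-1, 1] outliers.
        rng = np.random.default_rng(seed)
        a = rng.standard_normal(n); a /= np.linalg.norm(a)
        b = rng.standard_normal(n); b /= np.linalg.norm(b)
        Z_gt = np.outer(a**3, b**3)
        Z_gt = Z_gt / Z_gt.std()             # sigma scales Z_GT to unit variance
        E_gt = np.where(rng.random((n, n)) < rho,
                        rng.uniform(-1, 1, size=(n, n)), 0.0)
        return Z_gt + E_gt, Z_gt, E_gt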
Outlier Removal for Motion Segmentation: Under an affine camera model, the stacked matrix consisting of feature point trajectories of k rigidly moving objects forms a union of k affine subspaces of at most rank 4k [29]. But in practice, mismatches often occur due to occlusions or tracking algorithm limitations, and these introduce significant outliers into the feature motions such that the corresponding trajectory matrix may be at or near full rank. We adopt an experimental paradigm from [17] designed to test motion segmentation estimation in the presence of outliers. To mimic mismatches while retaining access to ground truth, we randomly corrupt the entries of the trajectory matrix formed from Hopkins155 data [28]. Specifically, following [17] we add noise drawn from N(0, 0.1ν) to randomly sampled points with outlier ratio ρ ∈ [0, 1], where ν is the maximum absolute value of the data. We may then attempt to recover a clean version from the corrupted measurements using RPCA as a preprocessing step; motion segmentation can then be applied using standard subspace clustering [29]. We use SSC and robust SSC algorithms [10] as baselines, and compare with RPCA preprocessing computed via PCP (as suggested in [10]) and PB-RPCA, each followed by SSC. Additionally, we sub-sampled the trajectory matrix to increase problem difficulty via fewer samples. Segmentation accuracy is reported in Fig. 3, where we observe that PB shows the best performance across different outlier ratios, and the performance gap widens when the measurements are scarce.
6
Conclusion
Since the introduction of convex RPCA algorithms, there has not been a significant algorithmic breakthrough in terms of dramatically enlarging the regime where success is possible, at least in the absence of any prior information (beyond the generic low-rank and sparsity assumptions). The likely explanation is that essentially all of these approaches solve either a problem in the form of (4) or an asymmetric problem in the form of (11), or else require strong a priori knowledge. We provide a novel integration of three important design criteria, concavity, non-separability, and symmetry, that leads to state-of-the-art results by a wide margin, without tuning parameters or prior knowledge.
References
[1] S. D. Babacan, M. Luessi, R. Molina, and A. K. Katsaggelos. Sparse Bayesian methods for low-rank
matrix estimation. IEEE Trans. Signal Process., 2012.
[2] C. M. Bishop. Pattern recognition and machine learning. Springer New York, 2006.
[3] S. Boyd, N. Parikh, E. Chu, B. Peleato, and J. Eckstein. Distributed optimization and statistical learning via the alternating direction method of multipliers. Foundations and Trends in Machine Learning, 2011.
[4] E. J. Candès, X. Li, Y. Ma, and J. Wright. Robust principal component analysis? J. of the ACM, 2011.
[5] E. J. Candès and B. Recht. Exact matrix completion via convex optimization. Foundations of Computational Mathematics, 2009.
[6] V. Chandrasekaran, S. Sanghavi, P. A. Parrilo, and A. S. Willsky. Rank-sparsity incoherence for matrix
decomposition. SIAM J. on Optim., 2011.
[7] R. Chartrand. Nonconvex splitting for regularized low-rank+ sparse decomposition. IEEE Trans. Signal
Process., 2012.
[8] Y.-L. Chen and C.-T. Hsu. A generalized low-rank appearance model for spatio-temporally correlated rain
streaks. In IEEE Int. Conf. Comput. Vis., 2013.
[9] X. Ding, L. He, and L. Carin. Bayesian robust principal component analysis. IEEE Trans. Image Process.,
2011.
[10] E. Elhamifar and R. Vidal. Sparse subspace clustering: Algorithm, theory, and applications. IEEE Trans.
Pattern Anal. and Mach. Intell., 2013.
[11] J. Fan and R. Li. Variable selection via nonconcave penalized likelihood and its oracle properties. J. Am.
Stat. Assoc., 2001.
[12] D. R. Hunter and K. Lange. A tutorial on MM algorithms. The American Statistician, 2004.
[13] H. Ji, C. Liu, Z. Shen, and Y. Xu. Robust video denoising using low rank matrix completion. In IEEE
Conf. Comput. Vis. and Pattern Recognit., 2010.
[14] M. I. Jordan, Z. Ghahramani, T. S. Jaakkola, and L. K. Saul. An introduction to variational methods for
graphical models. Mach. Learn., 1999.
[15] B. Lakshminarayanan, G. Bouchard, and C. Archambeau. Robust Bayesian matrix factorisation. In
AISTATS, 2011.
[16] Z. Lin, M. Chen, and Y. Ma. The augmented Lagrange multiplier method for exact recovery of corrupted
low-rank matrices. arXiv:1009.5055, 2010.
[17] G. Liu and S. Yan. Latent low-rank representation for subspace segmentation and feature extraction. In
IEEE Int. Conf. Comput. Vis., 2011.
[18] C. Lu, Z. Lin, and S. Yan. Smoothed low rank and sparse matrix recovery by iteratively reweighted least
squares minimization. IEEE Trans. Image Process., 2015.
[19] K. Mohan and M. Fazel. Iterative reweighted algorithms for matrix rank minimization. J. Mach. Learn.
Res., 2012.
[20] K. P. Murphy. Machine Learning: a Probabilistic Perspective. MIT Press, 2012.
[21] K. P. Murphy, Y. Weiss, and M. I. Jordan. Loopy belief propagation for approximate inference: An
empirical study. In UAI, 1999.
[22] T.-H. Oh, J.-Y. Lee, Y.-W. Tai, and I. S. Kweon. Robust high dynamic range imaging by rank minimization.
IEEE Trans. Pattern Anal. and Mach. Intell., 2015.
[23] T.-H. Oh, Y. Matsushita, I. S. Kweon, and D. Wipf. Pseudo-Bayesian robust PCA: Algorithms and analyses.
arXiv preprint arXiv:1512.02188, 2015.
[24] T.-H. Oh, Y.-W. Tai, J.-C. Bazin, H. Kim, and I. S. Kweon. Partial sum minimization of singular values in
Robust PCA: Algorithm and applications. IEEE Trans. Pattern Anal. and Mach. Intell., 2016.
[25] J. A. Palmer. Relative convexity. ECE Dept., UCSD, Tech. Rep, 2003.
[26] J. T. Parker, P. Schniter, and V. Cevher. Bilinear generalized approximate message passing.
arXiv:1310.2632, 2013.
[27] Y. Peng, A. Ganesh, J. Wright, W. Xu, and Y. Ma. RASL: Robust alignment by sparse and low-rank
decomposition for linearly correlated images. IEEE Trans. Pattern Anal. and Mach. Intell., 2012.
[28] R. Tron and R. Vidal. A benchmark for the comparison of 3-d motion segmentation algorithms. In IEEE
Conf. Comput. Vis. and Pattern Recognit., 2007.
[29] R. Vidal. Subspace clustering. IEEE Signal Process. Mag., 2011.
[30] N. Wang and D.-Y. Yeung. Bayesian robust matrix factorization for image and video processing. In IEEE
Int. Conf. Comput. Vis., 2013.
[31] D. Wipf. Non-convex rank minimization via an empirical Bayesian approach. In UAI, 2012.
[32] D. Wipf, B. D. Rao, and S. Nagarajan. Latent variable Bayesian models for promoting sparsity. IEEE
Trans. on Information Theory, 2011.
[33] L. Wu, A. Ganesh, B. Shi, Y. Matsushita, Y. Wang, and Y. Ma. Robust photometric stereo via low-rank
matrix completion and recovery. In Asian Conf. Comput. Vis., 2010.
[34] B. Xin and D. Wipf. Pushing the limits of affine rank minimization by adapting probabilistic PCA. In Int.
Conf. Mach. Learn., 2015.
6,009 | 6,436 | SPALS: Fast Alternating Least Squares via Implicit
Leverage Scores Sampling
Dehua Cheng
University of Southern California
dehua.cheng@usc.edu
Ioakeim Perros
Georgia Institute of Technology
perros@gatech.edu
Richard Peng
Georgia Institute of Technology
rpeng@cc.gatech.edu
Yan Liu
University of Southern California
yanliu.cs@usc.edu
Abstract
Tensor CANDECOMP/PARAFAC (CP) decomposition is a powerful but computationally challenging tool in modern data analytics. In this paper, we show ways of
sampling intermediate steps of alternating minimization algorithms for computing
low rank tensor CP decompositions, leading to the sparse alternating least squares
(SPALS) method. Specifically, we sample the Khatri-Rao product, which arises
as an intermediate object during the iterations of alternating least squares. This
product captures the interactions between different tensor modes, and form the
main computational bottleneck for solving many tensor related tasks. By exploiting
the spectral structures of the matrix Khatri-Rao product, we provide efficient access
to its statistical leverage scores. When applied to the tensor CP decomposition,
our method leads to the first algorithm that runs in sublinear time per-iteration
and approximates the output of deterministic alternating least squares algorithms.
Empirical evaluations of this approach show significant speedups over existing
randomized and deterministic routines for performing CP decomposition. On a
tensor of the size 2.4m ? 6.6m ? 92k with over 2 billion nonzeros formed by
Amazon product reviews, our routine converges in two minutes to the same error
as deterministic ALS.
1
Introduction
Tensors, a.k.a. multidimensional arrays, appear frequently in many applications, including spatial-temporal data modeling [40], signal processing [12, 14], deep learning [29] and more. Low-rank
tensor decomposition [21] is a fundamental tool for understanding and extracting the information
from tensor data, which has been actively studied in recent years. Developing scalable and provable
algorithms for most tensor processing tasks is challenging due to the non-convexity of the objective [18, 21, 16, 1]. Especially in the era of big data, scalable low-rank tensor decomposition algorithm
(that runs in nearly linear or even sublinear time in the input data size) has become an absolute must
to command the full power of tensor analytics. For instance, the Amazon review data [24] yield a
2,440,972 × 6,643,571 × 92,626 tensor with 2 billion nonzero entries after preprocessing. Such
data sets pose challenges of scalability to some of the simplest tensor decomposition tasks.
There are multiple well-defined tensor ranks [21]. In this paper, we focus on the tensor CANDECOMP/PARAFAC (CP) decomposition [17, 3], where the low-rank tensor is modeled by a summation over many rank-1 tensors. Due to its simplicity and interpretability, tensor CP decomposition, which finds the best rank-R approximation to the input tensor (often by minimizing the squared loss function), has been widely adopted in many applications [21].
Matrix Khatri-Rao (KRP) product captures the interactions between different tensor modes in the
CP decomposition, and it is essential for understanding many tensor related tasks. For instance, in
the alternating least square (ALS) algorithm, which has been the workhorse for solving the tensor
CP decomposition problem, a compact representation of the KRP can reduce the computational
cost directly. ALS is a simple and parameter-free algorithm that optimizes the target rank-R tensor
by updating its factor matrices in the block coordinate descent fashion. In each iteration, the
computational bottleneck is to solve a least square regression problem, where the size of the design
matrix, a KRP of factor matrices, is n² × n for an n × n × n tensor. While least squares regression is one of the most studied problems, solving it exactly requires at least O(n²) operations [23], which can be larger than the size of the input data for sparse tensors. For instance, the Amazon review data with 2 × 10⁹ nonzeros lead to a computational cost on the order of 10¹² per iteration. Exploiting
the structure of the KRP can reduce this cost to be linear in the input size, which on large-scale
applications is still expensive for an iterative algorithm.
An effective way for speeding up such numerical computations is through randomization [23, 38],
where the computational cost can be uncorrelated with the ambient size of the input data in the
optimal case. By exploring the connection between the spectral structures of the design matrix as the
KRP of the factor matrices, we provide efficient access to the statistical leverage score of the design
matrix. It allows us to propose the SPALS algorithm that samples rows of the KRP in a nearly-optimal
manner. This near optimality is twofold: 1) the estimates of leverage scores that we use have many
tight cases; 2) the operation of sampling a row can be efficiently performed. The latter requirement is
far from trivial: Note that even when the optimal sampling probability is given, drawing a sample
may require O(n²) preprocessing. Our result on the spectral structures of the design matrix allows us
to achieve both criteria simultaneously, leading to the first sublinear-per-iteration cost ALS algorithm
with provable approximation guarantees. Our contributions can be summarized as follows:
1. We show a close connection between the statistical leverage scores of the matrix Khatri-Rao
product and the scores of the input matrices. This yields efficient and accurate leverage
score estimations for importance sampling;
2. Our algorithm achieves the state-of-art computational efficiency, while approximating the
ALS algorithm provably for computing CP tensor decompositions. The running time of
each iteration of our algorithm is Õ(nR³), sublinear in the input size for large tensors.
3. Our theoretical results on the spectral structure of the KRP can also be applied to other tensor-related applications such as stochastic gradient descent [26] and high-order singular value
decompositions (HOSVD) [13].
We formalize the definitions in Section 2 and present our main results on leverage score estimation of
the KRP in Section 3. The SPALS algorithm and its theoretical analysis are presented in Section 4.
We discuss connections with previous works in Section 5. In Section 6, we empirically evaluate this algorithm and its variants on both synthetic and real-world data. We conclude and discuss our
work in Section 7.
2 Notation and Background
Vectors are represented by boldface lowercase letters, such as a, b, c; matrices by boldface capital letters, such as A, B, C; and tensors by boldface calligraphic capital letters, such as T. Without loss of generality, in this paper we focus our discussion on 3-mode tensors, but our results and algorithm can be easily generalized to higher-order tensors.
The i-th entry of a vector is denoted by a_i, element (i, j) of a matrix A is denoted by A_ij, and element (i, j, k) of a tensor T ∈ R^{I×J×K} is denoted by T_ijk. For notational simplicity, we assume that (i, j) also represents the index i + Ij between 1 and IJ, where the values of I and J should be clear from the context.
For a tensor T ∈ R^{I×J×K}, we denote the tensor norm by ‖T‖, i.e., ‖T‖ = ( Σ_{i,j,k=1}^{I,J,K} T_ijk² )^{1/2}.
Special Matrix Products Our manipulation of tensors as matrices revolves around several matrix
products. Our main focus is the matrix Khatri-Rao product (KRP) ⊙, where for a pair of matrices A ∈ R^{I×R} and B ∈ R^{J×R}, A ⊙ B ∈ R^{(IJ)×R} has element ((i, j), r) equal to A_ir B_jr.
We also utilize the matrix Kronecker product ⊗ and the elementwise matrix product ∗. More details on these products can be found in Appendix A and [21].
Tensor Matricization Here we consider only the case of mode-n matricization. For n = 1, 2, 3, the
mode-n matricization of a tensor T ∈ R^{I×J×K} is denoted by T_(n). For instance, T_(3) ∈ R^{K×IJ}, where element (k, (i, j)) is T_ijk.
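In NumPy terms, a minimal unfolding helper might look like the following sketch; index conventions for the flattened modes vary across references, and this version uses C-order flattening.

```python
import numpy as np

def unfold(T, mode):
    """Mode-n matricization: move axis `mode` to the front, flatten the rest."""
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

T = np.arange(24).reshape(2, 3, 4)
print(unfold(T, 2).shape)   # (4, 6): the mode-3 unfolding T_(3) in R^{K x IJ}
```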
Tensor CP Decomposition The tensor CP decomposition [17, 3] expresses a tensor as the sum of a
number of rank-one tensors, e.g.,

T = Σ_{r=1}^R a_r ∘ b_r ∘ c_r,

where ∘ denotes the outer product, T ∈ R^{I×J×K}, and a_r ∈ R^I, b_r ∈ R^J, c_r ∈ R^K for r = 1, 2, …, R. The tensor CP decomposition will be compactly represented as [[A, B, C]], where the factor matrices are A ∈ R^{I×R}, B ∈ R^{J×R}, and C ∈ R^{K×R}, and a_r, b_r, c_r are their r-th columns, respectively; i.e., [[A, B, C]]_ijk = Σ_{r=1}^R A_ir B_jr C_kr. As in the matrix case, each rank-1 component is usually interpreted as a hidden factor, which captures the interactions between all dimensions in the simplest way.
Given a tensor T ∈ R^{I×J×K} along with a target rank R, the goal is to find a rank-R tensor, specified by its factor matrices A ∈ R^{I×R}, B ∈ R^{J×R}, C ∈ R^{K×R}, that is as close to T as possible:

min_{A,B,C} ‖T − [[A, B, C]]‖² = Σ_{i,j,k} ( T_ijk − Σ_{r=1}^R A_ir B_jr C_kr )².
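A compact sketch of the [[A, B, C]] construction and of the squared loss above; the dimensions are arbitrary illustrative choices.

```python
import numpy as np

def cp_to_tensor(A, B, C):
    """Assemble [[A, B, C]]_ijk = sum_r A_ir * B_jr * C_kr via einsum."""
    return np.einsum('ir,jr,kr->ijk', A, B, C)

rng = np.random.default_rng(0)
I, J, K, R = 5, 6, 7, 3
A = rng.standard_normal((I, R))
B = rng.standard_normal((J, R))
C = rng.standard_normal((K, R))
T = cp_to_tensor(A, B, C) + 0.01 * rng.standard_normal((I, J, K))
loss = np.sum((T - cp_to_tensor(A, B, C))**2)   # the objective above
```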
Alternating Least Squares Algorithm. A widely used method for computing the CP decomposition is the alternating least squares (ALS) algorithm, which iteratively minimizes over one of the factor matrices with the others held fixed. For instance, when the factors A and B are fixed, algebraic manipulations suggest that
the best choice of C can be obtained by solving the least squares regression

min_C ‖X Cᵀ − T_(3)ᵀ‖²,   (1)

where the design matrix X = B ⊙ A is the KRP of A and B, and T_(3) is the matricization of T [21].
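The identity behind Eq. (1), namely that the mode-3 unfolding of an exact CP tensor equals C times the transposed KRP of the other two factors, can be sanity-checked numerically. The sketch below uses C-order flattening, which pairs with the Khatri-Rao product of (A, B) in that order; the paper's (i, j) → i + Ij convention corresponds to the other ordering.

```python
import numpy as np

def khatri_rao(A, B):
    """Rows indexed by (i, j) -> i*J + j; row value is A[i] * B[j] elementwise."""
    I, R = A.shape
    J = B.shape[0]
    return (A[:, None, :] * B[None, :, :]).reshape(I * J, R)

rng = np.random.default_rng(1)
I, J, K, R = 4, 5, 6, 2
A, B, C = (rng.standard_normal(s) for s in [(I, R), (J, R), (K, R)])
T = np.einsum('ir,jr,kr->ijk', A, B, C)
T3 = np.moveaxis(T, 2, 0).reshape(K, -1)          # mode-3 unfolding, C-order columns
assert np.allclose(T3, C @ khatri_rao(A, B).T)    # the identity behind Eq. (1)
```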
3 Near-optimal Leverage Score Estimation for Khatri-Rao Product
As shown in Section 2, the matrix KRP captures the essential interactions between the factor matrices
in the tensor CP decomposition. Estimating its leverage scores is challenging because the KRP of two matrices is significantly larger than the input matrices. For example, for the Amazon review data, the KRP of two factor matrices contains 10¹² rows, which is much larger than the data set itself with its 10⁹ nonzeros.
Importance sampling is one of the most powerful tools for obtaining sample efficient randomized
data reductions with strong guarantees. However, effective implementation requires comprehensive
knowledge on the objects to be sampled: the KRP of factor matrices. In this section, we provide
an efficient and effective toolset for estimating the statistical leverage scores of the KRP of factor
matrices, giving a direct way of applying importance sampling, one of the most important tools in
randomized matrix algorithms, for tensor CP decomposition related applications.
In the remainder of this section, we first define and discuss the optimal importance: statistical
leverage score, in the context of ℓ₂-regression. Then we propose and prove our near-optimal leverage
score estimation routine.
3.1 Leverage Score Sampling for ℓ₂-regression
It is known that, when p ≪ n, subsampling the rows of a design matrix X ∈ R^{n×p} by its statistical leverage scores and solving on the samples provides an efficient approximate solution to the least squares regression problem min ‖Xβ − y‖₂², with strong theoretical guarantees [23].
Definition 3.1 (Statistical Leverage Score). Given an n × r matrix X with n > r, let U denote the n × r matrix consisting of the top-r left singular vectors of X. Then the quantity

τ_i = ‖U_{i,:}‖₂²,

where U_{i,:} denotes the i-th row of U, is the statistical leverage score of the i-th row of X.
The statistical leverage score of a row captures the importance of that row in forming the linear subspace. Its optimality in solving ℓ₂-regression can be explained by the subspace projection nature of linear regression.
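Definition 3.1 translates directly into code; a minimal sketch via the thin SVD, assuming X has full column rank so that the thin-SVD left factor spans the column space:

```python
import numpy as np

def leverage_scores(X):
    """tau_i = ||U_{i,:}||_2^2 for the thin-SVD left factor U of X."""
    U, _, _ = np.linalg.svd(X, full_matrices=False)
    return np.sum(U**2, axis=1)

X = np.random.default_rng(2).standard_normal((100, 5))
tau = leverage_scores(X)
print(tau.sum())   # sums to rank(X) = 5
```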
It does not yield an efficient algorithm for the optimization problem in Equation (1) due to the
difficulties of computing statistical leverage scores. But this reduction to the matrix setting allows
for speedups using a variety of tools. In particular, sketching [6, 25, 27] or iterative sampling [22, 9]
lead to routines that run in input sparsity time: O(nnz) plus the cost of solving an O(r log n) sized
least squares problem. However, directly applying these methods still requires at least one pass over T at each iteration, which would dominate the overall cost.
3.2
Near-optimal Leverage Score Estimation
As discussed in the previous section, the KRP of the factor matrices captures the interaction between two modes in the tensor CP decomposition, e.g., the design matrix B ⊙ A in the linear regression problem. To extract a compact representation of this interaction, the statistical leverage scores of B ⊙ A provide an informative distribution over the rows, which can be utilized to randomly select important subsets of rows.
For a matrix with IJ rows in total, such as B ⊙ A, the calculation of statistical leverage scores is in general prohibitively expensive. However, due to the special structure of the KRP B ⊙ A, an upper bound on the statistical leverage scores, which is sufficient to obtain the same guarantee using slightly more samples, can be efficiently estimated, as shown in Theorem 3.2.
Theorem 3.2 (Khatri-Rao Bound). For matrices A ∈ R^{I×R} and B ∈ R^{J×R}, where I > R and J > R, let τ_i^A and τ_j^B be the statistical leverage scores of the i-th row of A and the j-th row of B, respectively. Then the statistical leverage score τ_{i,j}^{A⊙B} of the (iJ + j)-th row of the matrix A ⊙ B satisfies

τ_{i,j}^{A⊙B} ≤ τ_i^A τ_j^B.
Proof. Let the singular value decompositions of A and B be A = U_a Σ_a V_aᵀ and B = U_b Σ_b V_bᵀ, where U_a ∈ R^{I×R}, U_b ∈ R^{J×R}, and Σ_a, Σ_b, V_a, V_b ∈ R^{R×R}.
By the definition of the Khatri-Rao product, we have

A ⊙ B = [A_{:,1} ⊗ B_{:,1}, …, A_{:,R} ⊗ B_{:,R}] ∈ R^{IJ×R},

where ⊗ is the Kronecker product. By the form of the SVD and Lemma B.1, we have

A ⊙ B = [U_a Σ_a (V^a_{1,:})ᵀ ⊗ U_b Σ_b (V^b_{1,:})ᵀ, …, U_a Σ_a (V^a_{R,:})ᵀ ⊗ U_b Σ_b (V^b_{R,:})ᵀ]
      = [(U_a Σ_a) ⊗ (U_b Σ_b)] (V_aᵀ ⊙ V_bᵀ)
      = [U_a ⊗ U_b] (Σ_a ⊗ Σ_b) (V_aᵀ ⊙ V_bᵀ)
      = [U_a ⊗ U_b] S,

where S = (Σ_a ⊗ Σ_b)(V_aᵀ ⊙ V_bᵀ) ∈ R^{R²×R}. So the SVD of A ⊙ B can be constructed using the SVD of S = U_s Σ_s V_sᵀ, and the leverage scores of A ⊙ B can be computed from [U_a ⊗ U_b] U_s:

H = [U_a ⊗ U_b] U_s U_sᵀ [U_a ⊗ U_b]ᵀ,   (2)

and for the index k = iJ + j, we have

τ_{i,j}^{A⊙B} = H_{k,k} = e_kᵀ H e_k ≤ ‖[U_a ⊗ U_b]ᵀ e_k‖₂²   (3)
= Σ_{p=1}^R Σ_{q=1}^R (U^a_{i,p})² (U^b_{j,q})² = ( Σ_{p=1}^R (U^a_{i,p})² )( Σ_{q=1}^R (U^b_{j,q})² ) = τ_i^A τ_j^B,   (4)

where e_k is the k-th natural basis vector. The first inequality holds because H ≼ [U_a ⊗ U_b][U_a ⊗ U_b]ᵀ.
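The bound of Theorem 3.2 is easy to check numerically with the helpers sketched earlier:

```python
import numpy as np

def leverage_scores(X):
    U, _, _ = np.linalg.svd(X, full_matrices=False)
    return np.sum(U**2, axis=1)

def khatri_rao(A, B):
    I, R = A.shape
    J = B.shape[0]
    return (A[:, None, :] * B[None, :, :]).reshape(I * J, R)

rng = np.random.default_rng(3)
A = rng.standard_normal((30, 4))
B = rng.standard_normal((40, 4))
tau = leverage_scores(khatri_rao(A, B))                    # exact scores, all I*J of them
bound = np.outer(leverage_scores(A), leverage_scores(B)).ravel()
assert np.all(tau <= bound + 1e-10)                        # Theorem 3.2
print(tau.sum(), bound.sum())                              # R versus up to R^2
```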
Algorithm 1 Sample a row from B ⊙ A and T_(3).
  Draw a Bernoulli random variable z ∼ Bernoulli(β).
  if z = 0 then
    Draw i ∼ Multi(τ_1^A/R, …, τ_I^A/R) and j ∼ Multi(τ_1^B/R, …, τ_J^B/R).
  else
    Draw an entry (i, j, k) from the nonzero entries with probability proportional to T_{i,j,k}².
  end if
  Return the (jI + i)-th row of B ⊙ A and T_(3) with weight IJ p_{i,j}.
For the rank-R CP decomposition, the sum of the leverage scores over all rows of B ⊙ A equals R. The sum of our upper bound relaxes this to R², which means that we now need Õ(R²) samples instead of Õ(R). This result directly generalizes to the Khatri-Rao product of k-dimensional tensors; the proof is provided in Appendix C.
Theorem 3.3. For matrices A^(k) ∈ R^{I_k×R} with I_k > R for k = 1, …, K, let τ_i^(k) be the statistical leverage score of the i-th row of A^(k). Then, for the (Π_k I_k)-by-R matrix A^(1) ⊙ A^(2) ⊙ ⋯ ⊙ A^(K), we have

τ_{i_1,…,i_K} ≤ Π_{k=1}^K τ_{i_k}^(k),

where τ_{i_1,…,i_K} denotes the statistical leverage score of the row of A^(1) ⊙ A^(2) ⊙ ⋯ ⊙ A^(K) corresponding to the i_k-th row of A^(k) for k = 1, …, K.
Our estimation enables the development of efficient numerical algorithms and is nearly optimal in
three ways:
1. The estimation can be calculated in sublinear time given that max{I, J, K} = o(nnz(T)). For instance, for the Amazon review data, we have max{I, J, K} ≈ 10⁶ ≪ nnz(T) ≈ 10⁹;
2. The form of the estimation allows efficient sample-drawing. In fact, the row index can be
drawn efficiently by considering each mode independently;
3. The estimation is tight up to a constant factor R, and R is a modest constant for low-rank decomposition. Therefore, the estimation allows sample-efficient importance sampling.
4 SPALS: Sampling Alternating Least Squares
The direct application of our results on KRP leverage score estimation is an efficient version of the
ALS algorithm for tensor CP decomposition, where the computational bottleneck is to solve the
optimization problem (1).
Our main algorithmic result is a way to obtain a high-quality O(r² log n) row sample of X without
explicitly constructing the matrix X. This is motivated by a recent work that implicitly generates
sparsifiers for multistep random walks [4]. In particular, we sample the rows of X, the KRP of A and
B, using products of quantities computed on the corresponding rows in A and B, which provides
a rank-1 approximation to the optimal importance: the statistical leverage scores. This leads to a
sublinear-time sampling routine, and implies that we can approximate the progress of each ALS step in time linear in the size of the factor being updated, which can be sublinear in the number of non-zeros of T.
In the remainder of this section, we present our algorithm SPALS and prove its approximation
guarantee. We will also discuss its extension to other tensor related applications.
4.1 Sampling Alternating Least Squares
The optimal solution to optimization problem (1) is

C = T_(3) (B ⊙ A) (AᵀA ∗ BᵀB)⁻¹.

We separate the calculation into two parts: (1) T_(3)(B ⊙ A), and (2) (AᵀA ∗ BᵀB)⁻¹, where ∗ denotes the elementwise matrix product. The latter is the inverse of the Gram matrix of the Khatri-Rao product, which can also be computed efficiently due to its R × R size. We will mostly focus on evaluating the former expression.
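In code, the exact update is a small linear solve against the R × R Gram matrix; this sketch reuses the C-order conventions from the earlier snippets, and solves rather than forming the inverse explicitly:

```python
import numpy as np

def als_update_C(T, A, B):
    """Exact ALS step: C = T_(3) (KRP) (A^T A * B^T B)^{-1}, with * elementwise."""
    K = T.shape[2]
    T3 = np.moveaxis(T, 2, 0).reshape(K, -1)               # K x (I*J)
    KR = (A[:, None, :] * B[None, :, :]).reshape(-1, A.shape[1])
    G = (A.T @ A) * (B.T @ B)                              # R x R Gram of the KRP
    return np.linalg.solve(G, (T3 @ KR).T).T               # K x R factor update
```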
We perform the matrix multiplication by drawing a few rows from both T_(3)ᵀ and B ⊙ A and constructing the final solution from this subset of rows. The rows of B ⊙ A can be indexed by (i, j) for i = 1, …, I and j = 1, …, J, corresponding to the i-th row of A and the j-th row of B, respectively. That is, our sampling problem can be seen as sampling the entries of an I × J matrix P = {p_{i,j}}_{i,j}.
We define the sampling probability p_{i,j} as follows:

p_{i,j} = (1 − β) · τ_i^A τ_j^B / R² + β · Σ_{k=1}^K T_{i,j,k}² / ‖T‖²,   (5)
where β ∈ (0, 1). The first term is a rank-1 component of the matrix P. When the input tensor is sparse, the second term is sparse as well, so P admits a sparse-plus-low-rank structure and can easily be sampled as a mixture of two simple distributions. The sampling algorithm is described in Algorithm 1. Note that sampling by the leverage scores of the design matrix B ⊙ A alone provides a guaranteed but worse approximation for each step [23]. Since the design matrix itself is formed from the two factor matrices, i.e., we are not directly utilizing the information in the data, we include the second term for the worst-case scenario.
When R ≪ n and n ≪ nnz(T), where n = max(I, J, K), we can afford to calculate τ_i^A and τ_j^B exactly in each iteration, so the distribution corresponding to the first term can be sampled efficiently with preparation cost Õ(r²n + r³) and per-sample cost O(log n). Note that the second term requires a one-time O(nnz(T)) preprocessing step before the first iteration.
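A sketch of the sampler behind Algorithm 1 and Eq. (5); β and the number of samples m are left as inputs, and exact leverage scores stand in for the per-mode marginals:

```python
import numpy as np

def spals_sample(tauA, tauB, nz_idx, nz_val, beta, m, seed=0):
    """Draw m KRP row indices (i, j) from the mixture in Eq. (5).

    With prob. 1 - beta: i ~ tauA / R and j ~ tauB / R, independently.
    With prob. beta:     (i, j, k) ~ T_ijk^2 / ||T||^2 over the nonzeros
                         (nz_idx: (nnz, 3) index array, nz_val: values).
    Returns the (i, j) pairs and their probabilities p_ij under Eq. (5).
    """
    rng = np.random.default_rng(seed)
    R = tauA.sum()                       # equals the rank for exact scores
    pA, pB = tauA / R, tauB / R
    w = nz_val**2 / np.sum(nz_val**2)    # entry probabilities ~ T_ijk^2
    rows = np.empty((m, 2), dtype=int)
    lev = rng.random(m) >= beta          # the z = 0 (leverage) branch
    rows[lev, 0] = rng.choice(len(pA), size=lev.sum(), p=pA)
    rows[lev, 1] = rng.choice(len(pB), size=lev.sum(), p=pB)
    ent = rng.choice(len(nz_val), size=(~lev).sum(), p=w)
    rows[~lev] = nz_idx[ent, :2]         # keep (i, j) of the sampled entry
    p = (1 - beta) * pA[rows[:, 0]] * pB[rows[:, 1]]
    for t, (i, j) in enumerate(rows):    # sparse second term (slow but clear)
        sel = (nz_idx[:, 0] == i) & (nz_idx[:, 1] == j)
        p[t] += beta * w[sel].sum()
    return rows, p
```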
4.2 Approximation Guarantees
We define the following conditions:

C1. The sampling probability p_{i,j} satisfies p_{i,j} ≥ β₁ · τ_{i,j}^{A⊙B} / R for some constant β₁;
C2. The sampling probability p_{i,j} satisfies p_{i,j} ≥ β₂ · Σ_{k=1}^K T_{i,j,k}² / ‖T‖² for some constant β₂.
The proposed probabilities p_{i,j} in Equation (5) satisfy both conditions with β₁ = (1 − β)/R and β₂ = β. We can now prove our main approximation result.
Theorem 4.1. Let T ∈ R^{I×J×K} with n = max(I, J, K), and let A ∈ R^{I×R} and B ∈ R^{J×R} be any factor matrices for the first two dimensions. If a step of ALS on the third dimension gives C_opt, then a step of SPALS that samples m = Θ(R² log n / ε²) rows produces C satisfying

‖T − [[A, B, C]]‖² < ‖T − [[A, B, C_opt]]‖² + ε‖T‖².
Proof. Denote the sample-and-rescale matrix by S ∈ R^{m×IJ}. By Corollary E.3, we have ‖T_(3)(B ⊙ A) − T_(3) SᵀS (B ⊙ A)‖ ≤ ε‖T‖. Together with Lemma E.1, we can conclude.
Note that the approximation error of our algorithm does not accumulate over iterations. As in the stochastic gradient descent algorithm, errors incurred in previous iterations can be corrected in subsequent iterations.
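Putting the pieces together, a single SPALS update for C solves the regression (1) on the sampled, rescaled rows. The 1/√(m p) rescaling below is the standard importance-sampling choice for sketched least squares; it is our reading, since the exact constant printed in Algorithm 1 is ambiguous in this extraction.

```python
import numpy as np

def spals_update_C(T3, A, B, rows, p, m):
    """One SPALS step on m sampled rows (e.g., from spals_sample above).

    T3   : K x (I*J) mode-3 unfolding, columns in C order, i.e. i*J + j.
    rows : (m, 2) sampled (i, j) pairs; p: their probabilities under Eq. (5).
    """
    J = B.shape[0]
    idx = rows[:, 0] * J + rows[:, 1]            # column index of each (i, j)
    scale = 1.0 / np.sqrt(m * p)
    Xs = (A[rows[:, 0]] * B[rows[:, 1]]) * scale[:, None]   # sampled KRP rows
    Ys = T3[:, idx] * scale[None, :]                        # matching targets
    C, *_ = np.linalg.lstsq(Xs, Ys.T, rcond=None)
    return C.T                                   # K x R updated factor
```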
4.3 Extensions to Other Tensor-Related Applications
Importance Sampling SGD on CP Decomposition. We can incorporate importance sampling into the stochastic gradient descent algorithm for CP decomposition. The gradient takes the form

∂/∂C ‖T − [[A, B, C]]‖² = 2 ( C (B ⊙ A)ᵀ − T_(3) ) (B ⊙ A).

By sampling rows according to the proposed distribution, we reduce the per-step variance via importance sampling [26]. Our result addresses the computational difficulty of finding the appropriate importance weights.
Sampling ALS for Higher-Order Singular Value Decomposition (HOSVD). For computing the HOSVD [13] of a tensor, the Kronecker product is involved instead of the Khatri-Rao product. In Appendix D, we prove similar leverage score approximation results for the Kronecker product. In fact, for the Kronecker product, our "approximation" provides the exact leverage scores.
Theorem 4.2. For matrices A ∈ R^{I×M} and B ∈ R^{J×N}, where I > M and J > N, let τ_i^A and τ_j^B be the statistical leverage scores of the i-th row of A and the j-th row of B, respectively. Then, for the matrix A ⊗ B ∈ R^{IJ×MN} with statistical leverage score τ_{i,j}^{A⊗B} for the (iJ + j)-th row, we have

τ_{i,j}^{A⊗B} = τ_i^A τ_j^B.
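The equality in Theorem 4.2 can be verified directly, since np.kron orders rows exactly as (iJ + j):

```python
import numpy as np

def leverage_scores(X):
    U, _, _ = np.linalg.svd(X, full_matrices=False)
    return np.sum(U**2, axis=1)

rng = np.random.default_rng(4)
A = rng.standard_normal((10, 3))
B = rng.standard_normal((12, 4))
exact = leverage_scores(np.kron(A, B))
prod = np.outer(leverage_scores(A), leverage_scores(B)).ravel()
assert np.allclose(exact, prod)   # equality, per Theorem 4.2
```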
5 Related Works
CP decomposition is one of the simplest and most easily interpretable tensor decompositions. Fitting it in an ALS fashion is still considered the state of the art in the recent tensor analytics literature [37]. The most widely used implementation of ALS is the MATLAB Tensor Toolbox [21], which directly computes the analytic solution of the ALS steps. There is a line of work on speeding up this procedure in
distributed/parallel/MapReduce settings [20, 19, 5, 33]. Such approaches are compatible with our
approach, as we directly reduce the number of steps by sampling. A similar connection holds for
works achieving more efficient computation of KRP steps of the ALS algorithm such as in [32].
The applicability of randomized numerical linear algebra tools to tensors was studied during their
development [28]. Within the context of sampling-based tensor decomposition, early work has been
published in [36, 35] that focuses though on Tucker decomposition. In [30], sampling is used as
a means of extracting small representative sub-tensors out of the initial input, which are further
decomposed via the standard ALS and carefully merged to form the output. Another work based
on an a-priori sampling of the input tensor can be found in [2]. However, recent developments in
randomized numerical linear algebra often focused on over-constrained regression problems or low
rank matrices. The incorporation of such tools into tensor analytics routines was fairly recent [31, 37]
Most closely related to our algorithm are the routines from [37], which gave a sketch-based CP
decomposition inspired by the earlier work in [31]. Both approaches only need to examine the
factorization at each iteration, followed by a number of updates that only depends on rank. A main
difference is that the sketches in [37] move the non-zeros, while our sampling approach removes
many entries instead. Their algorithm also performs a subsequent FFT step, while our routine always
works on subsets of the matricizations. Our method is much more suitable for sparse tensors. Also,
our routine can be considered as data dependent randomization, which enjoys better approximation
accuracy than [37] in the worst case.
For direct comparison, the method in [37] and ours both require O(nnz(T)) preprocessing at the beginning. Then, for each iteration, our method requires Õ(nr³) operations, compared with O(r(n + Bb log b) + r³) for [37]. Here B and b are parameters of the sketching in [37] and need to be tuned for each application. Depending on the target accuracy, b can be as large as the input size: on the cube synthetic tensors with n = 10³ that the experiments in [37] focused on, b was set between 2¹⁴ and 2¹⁶ ≈ 6 × 10⁴ in order to converge to good relative errors.
From a distance, our method can be viewed as incorporating randomization into the intermediate steps
of algorithms, and can be viewed as higher dimensional analogs of weighted SGD algorithms [39].
Compared to more global uses of randomization [38], these more piecemeal invocations have several
advantages. For high dimensional tensors, sketching methods need to preserve all dimensions, while
the intermediate problems only involve matrices, and can often be reduced to smaller dimensions.
For approximating a rank-R tensor in d dimensions to error ε, this represents the difference between poly(R, 1/ε) and (R/ε)^d. Furthermore, the lower cost of each step of alternating minimization makes it much
easier to increase accuracy at the last few steps, leading to algorithms that behave the same way in
the limit. The wealth of works on reducing sizes of matrices while preserving objectives such as `p
norms, hinge losses, and M-estimators [11, 10, 8, 7] also suggest that this approach can be directly
adapted to much wider ranges of settings and objectives.
6 Experimental Results
We implemented and evaluated our algorithms in a single-machine setting; the source code is available online at https://github.com/dehuacheng/SpAls. Experiments were run on a single machine with two Intel Xeon E5-2630 v3 CPUs and 256 GB of memory. All methods are implemented in C++ with OpenMP parallelization. We report averages over 5 trials.
Dense Synthetic Tensors. We start by comparing our method against the sketching-based algorithm from [37] in the single-thread setting used in their evaluation. The synthetic data we tested are third-order tensors with dimension n = 1000, as described in [37]. We generated a rank-1000 tensor with harmonically decreasing weights on the rank-1 components; after normalization, random Gaussian noise with noise-to-signal ratio nsr = 0.1, 1, 10 was added. As in previous experimental evaluations [37], we set the target rank to r = 10. The performances are given in Table 1a. We vary the sampling rate of our algorithm: SPALS(α) samples αr² log² n rows at each iteration.
Table 1(a): Running times per iteration (in seconds) and errors of various alternating least squares implementations.

                  nsr = 0.1        nsr = 1          nsr = 10
                  error   time     error   time     error   time
  ALS-dense       0.27    64.8     1.08    66.2     10.08   67.6
  sketch(20, 14)  0.45    6.50     1.37    4.70     11.11   4.90
  sketch(40, 16)  0.30    16.0     1.13    12.7     10.27   12.4
  ALS-sparse      0.24    501      1.09    512      10.15   498
  SPALS(0.3)      0.20    1.76     1.14    1.93     10.40   1.92
  SPALS(1)        0.18    5.79     1.10    5.64     10.21   5.94
  SPALS(3.0)      0.21    15.9     1.09    16.1     10.15   16.16

Table 1(b): Relative error and running times per iteration on the Amazon review tensor with dimensions 2.44e6 × 6.64e6 × 9.26e4 and 2.02 billion non-zeros.

                  error    time
  ALS-sparse      0.981    142
  SPALS(0.3)      0.987    6.97
  SPALS(1)        0.983    15.7
  SPALS(3.0)      0.982    38.9
On these instances, a call to SPALS with rate α sampled about 4.77α × 10³ rows and, as the tensor is dense, touched about 4.77α × 10⁶ entries. The correspondence between running times and rates demonstrates the sublinear runtimes of SPALS at low sampling rates. Compared with [37], our algorithm employs a data-dependent random sketch with minimal overhead, which yields significantly better precision for a similar amount of computation.
Sparse Data Tensor. Our original motivation for SPALS was to handle large sparse data tensors. We ran our algorithm on a large-scale tensor generated from Amazon review data [24]; its size and the convergence of SPALS under various parameters are given in Table 1b. We conducted these experiments in parallel with 16 threads. The Amazon data tensor has a much higher noise-to-signal ratio than our other experiments, which is common for large-scale data tensors: running deterministic ALS with rank 10 on it leads to a relative error of 98.1%. SPALS converges rapidly to a comparable approximation in only a small fraction of the time required by the ALS algorithm.
7 Discussion
Our experiments show that SPALS provides notable speedup over previous CP decomposition routines
on both dense and sparse data. There are two main sources of speedups: (1) the low target rank and
moderate individual dimensions enable us to compute leverage scores efficiently; and (2) the simple representation of the sampled form allows us to reuse mostly code from existing ALS routines with minimal computational overhead. It is worth noting that in the dense case, the total number of entries
accessed during all 20 iterations is far fewer than the size of T . Nonetheless, the adaptive nature
of the sampling scheme means all the information from T are taken into account while generating
the first and subsequent iterations. From a randomized algorithms perspective, the sub-linear time
sampling steps bear strong resemblances with stochastic optimization routines [34]. We believe more
systematically investigating such connections can lead to more direct connections between tensors
and randomized numerical linear algebra, and in turn further algorithmic improvements.
Acknowledgments
This work is supported in part by the U.S. Army Research Office under grant number W911NF-15-1-0491, NSF Research Grant IIS-1254206 and IIS-1134990. The views and conclusions are those of
the authors and should not be interpreted as representing the official policies of the funding agency,
or the U.S. Government.
References
[1] B. Barak, J. A. Kelner, and D. Steurer. Dictionary learning and tensor decomposition via the sum-of-squares
method. In STOC, 2015.
[2] S. Bhojanapalli and S. Sanghavi. A New Sampling Technique for Tensors. ArXiv e-prints, 2015.
[3] J. D. Carroll and J.-J. Chang. Analysis of individual differences in multidimensional scaling via an n-way
generalization of ?eckart-young? decomposition. Psychometrika, 1970.
[4] D. Cheng, Y. Cheng, Y. Liu, R. Peng, and S.-H. Teng. Spectral sparsification of random-walk matrix
polynomials. arXiv preprint arXiv:1502.03496, 2015.
[5] J. H. Choi and S. Vishwanathan. Dfacto: Distributed factorization of tensors. In NIPS, 2014.
[6] K. L. Clarkson and D. P. Woodruff. Low rank approximation and regression in input sparsity time. In
STOC, 2013.
[7] K. L. Clarkson and D. P. Woodruff. Input sparsity and hardness for robust subspace approximation. In
FOCS, 2015.
[8] K. L. Clarkson and D. P. Woodruff. Sketching for m-estimators: A unified approach to robust regression.
In SODA, 2015.
[9] M. B. Cohen, Y. T. Lee, C. Musco, C. Musco, R. Peng, and A. Sidford. Uniform sampling for matrix
approximation. In ITCS, 2015.
[10] M. B. Cohen and R. Peng. ℓ_p row sampling by Lewis weights. In STOC, 2015.
[11] A. Dasgupta, P. Drineas, B. Harb, R. Kumar, and M. W. Mahoney. Sampling algorithms and coresets for ℓ_p regression. SIAM Journal on Computing, 2009.
[12] L. De Lathauwer and B. De Moor. From matrix to tensor: Multilinear algebra and signal processing. In
Institute of Mathematics and Its Applications Conference Series, 1998.
[13] L. De Lathauwer, B. De Moor, and J. Vandewalle. A multilinear singular value decomposition. SIAM
journal on Matrix Analysis and Applications, 2000.
[14] V. De Silva and L.-H. Lim. Tensor rank and the ill-posedness of the best low-rank approximation problem.
SIAM J. Matrix Anal. Appl., 2008.
[15] P. Drineas, M. W. Mahoney, S. Muthukrishnan, and T. Sarlós. Faster least squares approximation.
Numerische Mathematik, 2011.
[16] R. Ge, F. Huang, C. Jin, and Y. Yuan. Escaping from saddle points - online stochastic gradient for tensor
decomposition. In COLT, 2015.
[17] R. A. Harshman. Foundations of the parafac procedure: Models and conditions for an" explanatory"
multi-modal factor analysis. 1970.
[18] C. J. Hillar and L.-H. Lim. Most tensor problems are np-hard. Journal of the ACM (JACM), 2013.
[19] I. Jeon, E. E. Papalexakis, U. Kang, and C. Faloutsos. Haten2: Billion-scale tensor decompositions. In
ICDE, 2015.
[20] U. Kang, E. Papalexakis, A. Harpale, and C. Faloutsos. Gigatensor: scaling tensor analysis up by 100
times-algorithms and discoveries. In KDD, 2012.
[21] T. G. Kolda and B. W. Bader. Tensor decompositions and applications. SIAM review, 2009.
[22] M. Li, G. Miller, and R. Peng. Iterative row sampling. In FOCS, 2013.
[23] M. W. Mahoney. Randomized algorithms for matrices and data. Foundations and Trends in Machine
Learning, 2011.
[24] J. McAuley and J. Leskovec. Hidden factors and hidden topics: understanding rating dimensions with
review text. In RecSys, 2013.
[25] X. Meng and M. W. Mahoney. Low-distortion subspace embeddings in input-sparsity time and applications
to robust linear regression. In STOC, 2013.
[26] D. Needell, R. Ward, and N. Srebro. Stochastic gradient descent, weighted sampling, and the randomized
kaczmarz algorithm. In NIPS, 2014.
[27] J. Nelson and H. L. Nguyễn. Osnap: Faster numerical linear algebra algorithms via sparser subspace
embeddings. In FOCS, 2013.
[28] N. H. Nguyen, P. Drineas, and T. D. Tran. Tensor sparsification via a bound on the spectral norm of random
tensors. CoRR, 2010.
[29] A. Novikov, D. Podoprikhin, A. Osokin, and D. P. Vetrov. Tensorizing neural networks. In NIPS, 2015.
[30] E. E. Papalexakis, C. Faloutsos, and N. D. Sidiropoulos. Parcube: Sparse parallelizable tensor decompositions. In Machine Learning and Knowledge Discovery in Databases. Springer, 2012.
[31] N. Pham and R. Pagh. Fast and scalable polynomial kernels via explicit feature maps. In KDD, 2013.
[32] A.-H. Phan, P. Tichavsky, and A. Cichocki. Fast alternating ls algorithms for high order candecomp/parafac
tensor factorizations. Signal Processing, IEEE Transactions on, 2013.
[33] S. Smith, N. Ravindran, N. D. Sidiropoulos, and G. Karypis. Splatt: Efficient and parallel sparse tensormatrix multiplication. 29th IEEE International Parallel & Distributed Processing Symposium, 2015.
[34] T. Strohmer and R. Vershynin. A randomized kaczmarz algorithm with exponential convergence. JFAA,
2009.
[35] J. Sun, S. Papadimitriou, C.-Y. Lin, N. Cao, S. Liu, and W. Qian. Multivis: Content-based social network
exploration through multi-way visual analysis. In SDM. SIAM, 2009.
[36] C. E. Tsourakakis. Mach: Fast randomized tensor decompositions. In SDM. SIAM, 2010.
[37] Y. Wang, H.-Y. Tung, A. J. Smola, and A. Anandkumar. Fast and guaranteed tensor decomposition via
sketching. In NIPS, 2015.
[38] D. P. Woodruff. Sketching as a tool for numerical linear algebra. Foundations and Trends in Theoretical
Computer Science, 2014.
[39] J. Yang, Y. Chow, C. Ré, and M. W. Mahoney. Weighted SGD for ℓ_p regression with randomized preconditioning. In SODA, 2016.
[40] R. Yu, D. Cheng, and Y. Liu. Accelerated online low rank tensor learning for multivariate spatiotemporal
streams. In ICML, pages 238?247, 2015.
6,010 | 6,437 | Selective inference for group-sparse linear models
Rina Foygel Barber
Department of Statistics
University of Chicago
rina@uchicago.edu
Fan Yang
Department of Statistics
University of Chicago
fyang1@uchicago.edu
Prateek Jain
Microsoft Research India
prajain@microsoft.com
John Lafferty
Depts. of Statistics and Computer Science
University of Chicago
lafferty@galton.uchicago.edu
Abstract
We develop tools for selective inference in the setting of group sparsity, including
the construction of confidence intervals and p-values for testing selected groups of
variables. Our main technical result gives the precise distribution of the magnitude
of the projection of the data onto a given subspace, and enables us to develop
inference procedures for a broad class of group-sparse selection methods, including
the group lasso, iterative hard thresholding, and forward stepwise regression. We
give numerical results to illustrate these tools on simulated data and on health
record data.
1
Introduction
Significant progress has been recently made on developing inference tools to complement the feature
selection methods that have been intensively studied in the past decade [6, 5, 9]. The goal of selective
inference is to make accurate uncertainty assessments for the parameters estimated using a feature
selection algorithm, such as the lasso [12]. The fundamental challenge is that after the data have
been used to select a set of coefficients to be studied, this selection event must then be accounted
for when performing inference, using the same data. A specific goal of selective inference is to
provide p-values and confidence intervals for the fitted coefficients. As the sparsity pattern is chosen
using nonlinear estimators, the distribution of the estimated coefficients is typically non-Gaussian
and multimodal, even under a standard Gaussian noise model, making classical techniques unusable
for accurate inference. It is of particular interest to develop finite-sample, non-asymptotic results.
In this paper, we present new results for selective inference in the setting of group sparsity [15, 3, 10].
We consider the linear model Y = Xβ + N(0, σ²Iₙ) where X ∈ ℝ^{n×p} is a fixed design matrix. In
many applications, the p columns or features of X are naturally grouped into blocks C₁, ..., C_G ⊆
{1, ..., p}. In the high dimensional setting, the working assumption is that only a few of the
corresponding blocks of the coefficients β contain nonzero elements; that is, β_{C_g} = 0 for most groups
g. This group-sparse model can be viewed as an extension of the standard sparse regression model.
Algorithms for fitting this model, such as the group lasso [15], extend well-studied methods for sparse
linear regression to this grouped setting.
In the group-sparse setting, recent results of Loftus and Taylor [9] give a selective inference method
for computing p-values for each group chosen by a model selection method such as forward stepwise
regression; selection via cross-validation was studied in [7]. More generally, the inference technique
of [7] applies to any model selection method whose outcome can be described in terms of quadratic
conditions on Y . However, their technique cannot be used to construct confidence intervals for the
selected coefficients or for the size of the effects of the selected groups.
Our main contribution in this work is to provide a tool for constructing confidence intervals as well
as p-values for testing selected groups. In contrast to the (non-grouped) sparse regression setting,
the confidence interval construction does not follow immediately from the p-value calculation, and
requires a careful analysis of non-centered multivariate normal distributions. Our key technical result
precisely characterizes the density of ‖P_L Y‖₂ (the magnitude of the projection of Y onto a given
subspace L), conditioned on a particular selection event. This "truncated projection lemma" is the
group-wise analogue of the "polyhedral lemma" of Lee et al. [5] for the lasso. This technical result
enables us to develop inference tools for a broad class of model selection methods, including the
group lasso [15], iterative hard thresholding [1, 4], and forward stepwise group selection [14].
In the following section we frame the problem of group-sparse inference more precisely, and present
previous results in this direction. We then state our main technical results; the proofs of the results are
given in the supplementary material. In Section 3 we show how these results can be used to develop
inferential tools for three different model selection algorithms for group sparsity. In Section 4 we
give numerical results to illustrate these tools on simulated data, as well as on the California county
health data used in previous work [9]. We conclude with a brief discussion of our work.
2 Main results: selective inference over subspaces
To establish some notation, we will write P_L for the projection to any linear subspace L ⊆ ℝⁿ, and
P_L^⊥ for the projection to its orthogonal complement. For y ∈ ℝⁿ, dir_L(y) = P_L y / ‖P_L y‖₂ ∈ L ∩ S^{n−1} is
the unit vector in the direction of P_L y. This direction is not defined if P_L y = 0.
We focus on the linear model Y = Xβ + N(0, σ²Iₙ), where X ∈ ℝ^{n×p} is fixed and σ² > 0 is
assumed to be known. More generally, our model is Y ∼ N(μ, σ²Iₙ) with μ ∈ ℝⁿ unknown and σ²
known. For a given block of variables C_g ⊆ [p], we write X_g to denote the n × |C_g| submatrix of X
consisting of all features of this block. For a set S ⊆ [G] of blocks, X_S consists of all features that
lie in any of the blocks in S.
When we refer to "selective inference," we are generally interested in the distribution of subsets
of parameters that have been chosen by some model selection procedure. After choosing a set of
groups S ⊆ [G], we would like to test whether the true mean μ is correlated with a group X_g for
each g ∈ S after controlling for the remaining selected groups, i.e. after regressing out all the other
groups, indexed by S\g. Thus, the following question is central to selective inference:
Question_{g,S}: What is the magnitude of the projection of μ onto the span of P^⊥_{X_{S\g}} X_g?   (1)
In particular, we are interested in a hypothesis test to determine if μ is orthogonal to this span, that
is, whether block g should be removed from the model with group-sparse support determined by S;
this is the question studied by Loftus and Taylor [9] for which they compute p-values. Alternatively,
we may be interested in a confidence interval on ‖P_L μ‖₂, where L = span(P^⊥_{X_{S\g}} X_g). Since S
and g are themselves determined by the data Y, any inference on these questions must be performed
"post-selection," by conditioning on the event that S is the selected set of groups.
2.1 Background: The polyhedral lemma
In the more standard sparse regression setting without grouped variables, after selecting a set S ⊆ [p]
of features corresponding to columns of X, we might be interested in testing whether the column X_j
should be included in the model obtained by regressing Y onto X_{S\j}. We may want to test the null
hypothesis that X_j^⊤ P^⊥_{X_{S\j}} μ is zero, or to construct a confidence interval for this inner product.
In the setting where S is the output of the lasso, Lee et al. [5] and Tibshirani et al. [13] characterize
the selection event as a polyhedron in ℝⁿ: for any set S ⊆ [p] and any signs s ∈ {±1}^S, the event
that the lasso (with a fixed regularization parameter λ) selects the given support with the given signs
is equivalent to the event Y ∈ A = {y : Ay < b}, where A is a fixed matrix and b is a fixed
vector, which are functions of X, S, s, λ. The inequalities are interpreted elementwise, yielding a
convex polyhedron A. To test the regression question described above, one then tests η^⊤μ for a fixed
unit vector η ∝ P^⊥_{X_{S\j}} X_j. The "polyhedral lemma," found in [5, Theorem 5.2] and [13, Lemma
2], proves that the distribution of η^⊤Y, after conditioning on {Y ∈ A} and on P_η^⊥ Y, is given by a
truncated normal distribution, with density
f(r) ∝ exp(−(r − η^⊤μ)²/(2σ²)) · 1{a₁(Y) ≤ r ≤ a₂(Y)}.   (2)
The interval endpoints a₁(Y), a₂(Y) depend on Y only through P_η^⊥ Y and are defined to include
exactly those values of r that are feasible given the event Y ∈ A. That is, the interval contains all
values r such that r·η + P_η^⊥ Y ∈ A.
Examining (2), we see that under the null hypothesis η^⊤μ = 0, this is a truncated zero-mean normal
density, which can be used to construct a p-value testing η^⊤μ = 0. To construct a confidence interval
for η^⊤μ, we can instead use (2) with nonzero η^⊤μ, which is a truncated noncentral normal density.
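To make (2) concrete, the sketch below (Python/NumPy, not the authors' R implementation) computes a one-sided selective p-value from the truncated normal; the endpoints a₁, a₂ are assumed to have been derived from the polyhedron beforehand.
```python
import numpy as np
from scipy.stats import norm

def truncated_normal_pvalue(eta, y, sigma, a1, a2):
    """One-sided selective p-value for H0: eta'mu = 0. Under H0, eta'Y is
    N(0, sigma^2*||eta||^2) truncated to [a1, a2], the interval implied by
    the selection polyhedron (a1, a2 are assumed precomputed)."""
    s = sigma * np.linalg.norm(eta)   # standard deviation of eta'Y
    t = float(eta @ y)                # observed statistic
    num = norm.cdf(a2 / s) - norm.cdf(t / s)
    den = norm.cdf(a2 / s) - norm.cdf(a1 / s)
    return num / den
```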
2.2 The group-sparse case
In the group-sparse regression setting, Loftus and Taylor [9] extend the work of Lee et al. [5] to
questions where we would like to test P_L μ, the projection of the mean μ to some potentially multidimensional subspace, rather than simply testing η^⊤μ, which can be interpreted as a projection to
a one-dimensional subspace, L = span(η). For a fixed set A ⊆ ℝⁿ and a fixed subspace L of
dimension k, Loftus and Taylor [9, Theorem 3.1] prove that, after conditioning on {Y ∈ A}, on
dir_L(Y), and on P_L^⊥ Y, under the null hypothesis P_L μ = 0, the distribution of ‖P_L Y‖₂ is given by
a truncated χ_k distribution,
‖P_L Y‖₂ ∼ (σ·χ_k truncated to R_Y) where R_Y = {r : r·dir_L(Y) + P_L^⊥ Y ∈ A}.   (3)
In particular, this means that, if we would like to test the null hypothesis P_L μ = 0, we can compute
a p-value using the truncated χ_k distribution as our null distribution. To better understand this null
hypothesis, suppose that we run a group-sparse model selection algorithm that chooses a set of blocks
S ⊆ [G]. We might then want to test whether some particular block g ∈ S should be retained in this
model or removed. In that case, we would set L = span(P^⊥_{X_{S\g}} X_g) and test whether P_L μ = 0.
Examining the parallels between this result and the work of Lee et al. [5], where (2) gives either
a truncated zero-mean normal or truncated noncentral normal distribution depending on whether
the null hypothesis η^⊤μ = 0 is true or false, we might expect that the result (3) of Loftus and
Taylor [9] can extend in a straightforward way to the case where P_L μ ≠ 0. More specifically, we
might expect that (3) might then be replaced by a truncated noncentral χ_k distribution, with its
noncentrality parameter determined by ‖P_L μ‖₂. However, this turns out not to be the case. To
understand why, observe that ‖P_L Y‖₂ and dir_L(Y) are the length and the direction of the vector
P_L Y; in the inference procedure of Loftus and Taylor [9], they need to condition on the direction
dir_L(Y) in order to compute the truncation interval R_Y, and then they perform inference on ‖P_L Y‖₂,
the length. These two quantities are independent for a centered multivariate normal, and therefore if
P_L μ = 0 then ‖P_L Y‖₂ follows a χ_k distribution even if we have conditioned on dir_L(Y). However,
in the general case where P_L μ ≠ 0, we do not have independence between the length and the
direction of P_L Y, and so while ‖P_L Y‖₂ is marginally distributed as a noncentral χ_k, this is no
longer true after conditioning on dir_L(Y).
In this work, we consider the problem of computing the distribution of ‖P_L Y‖₂ after conditioning
on dir_L(Y), which is the setting that we require for inference. This leads to the main contribution of
this work, where we are able to perform inference on P_L μ beyond simply testing the null hypothesis
that P_L μ = 0.
2.3 Key lemma: Truncated projections of Gaussians
Before presenting our key lemma, we introduce some further notation. Let A ⊆ ℝⁿ be any fixed
open set and let L ⊆ ℝⁿ be a fixed subspace of dimension k. For any y ∈ A, consider the set
R_y = {r > 0 : r·dir_L(y) + P_L^⊥ y ∈ A} ⊆ ℝ₊.
Note that R_y is an open subset of ℝ₊, and its construction does not depend on ‖P_L y‖₂, but we see
that ‖P_L y‖₂ ∈ R_y by definition.
Lemma 1 (Truncated projection). Let A ⊆ ℝⁿ be a fixed open set and let L ⊆ ℝⁿ be a fixed
subspace of dimension k. Suppose that Y ∼ N(μ, σ²Iₙ). Then, conditioning on the values of
dir_L(Y) and P_L^⊥ Y and on the event Y ∈ A, the conditional distribution of ‖P_L Y‖₂ has density¹
f(r) ∝ r^{k−1} exp(−(r² − 2r·⟨dir_L(Y), μ⟩)/(2σ²)) · 1{r ∈ R_Y}.
We pause to point out two special cases that are treated in the existing literature.
¹Here and throughout the paper, we ignore the possibility that Y ⊥ L since this has probability zero.
Special case 1: k = 1 and A is a convex polytope. Suppose A is the convex polytope {y : Ay < b}
for fixed A ∈ ℝ^{m×n} and b ∈ ℝᵐ. In this case, this almost exactly yields the "polyhedral lemma" of
Lee et al. [5, Theorem 5.2]. Specifically, in their work they perform inference on η^⊤μ for a fixed
vector η; this corresponds to taking L = span(η) in our notation. Then since k = 1, Lemma 1 yields
a truncated Gaussian distribution, coinciding with Lee et al. [5]'s result (2). The only difference
relative to [5] is that our lemma implicitly conditions on sign(η^⊤Y), which is not required in [5].
Special case 2: the mean μ is orthogonal to the subspace L. In this case, without conditioning
on {Y ∈ A}, we have P_L Y = P_L(μ + N(0, σ²I)) = P_L(N(0, σ²I)), and so ‖P_L Y‖₂ ∼ σ·χ_k.
Without conditioning on {Y ∈ A} (or equivalently, taking A = ℝⁿ), the resulting density is then
f(r) ∝ r^{k−1} e^{−r²/(2σ²)} · 1{r > 0},
which is the density of the χ_k distribution (rescaled by σ), as expected. If we also condition on
{Y ∈ A} then this is a truncated χ_k distribution, as proved in Loftus and Taylor [9, Theorem 3.1].
2.4 Selective inference on truncated projections
We now show how the key result in Lemma 1 can be used for group-sparse inference. In particular, we
show how to compute a p-value for the null hypothesis H₀: μ ⊥ L, or equivalently, H₀: ‖P_L μ‖₂ = 0.
In addition, we show how to compute a one-sided confidence interval for ‖P_L μ‖₂, specifically,
how to give a lower bound on the size of this projection.
Theorem 1 (Selective inference for projections). Under the setting and notation of Lemma 1, define
P = [ ∫_{r∈R_Y, r>‖P_L Y‖₂} r^{k−1} e^{−r²/(2σ²)} dr ] / [ ∫_{r∈R_Y} r^{k−1} e^{−r²/(2σ²)} dr ].   (4)
If μ ⊥ L (or, more generally, if ⟨dir_L(Y), μ⟩ = 0), then P ∼ Uniform[0, 1]. Furthermore, for any
desired error level α ∈ (0, 1), there is a unique value L_α ∈ ℝ satisfying
[ ∫_{r∈R_Y, r>‖P_L Y‖₂} r^{k−1} e^{−(r²−2rL_α)/(2σ²)} dr ] / [ ∫_{r∈R_Y} r^{k−1} e^{−(r²−2rL_α)/(2σ²)} dr ] = α,   (5)
and we have
P{‖P_L μ‖₂ ≥ L_α} ≥ P{⟨dir_L(Y), μ⟩ ≥ L_α} = 1 − α.
Finally, the p-value and the confidence interval agree in the sense that P < α if and only if L_α > 0.
From the form of Lemma 1, we see that we are actually performing inference on ⟨dir_L(Y), μ⟩.
Since ‖P_L μ‖₂ ≥ ⟨dir_L(Y), μ⟩, this means that any lower bound on ⟨dir_L(Y), μ⟩ also gives a lower
bound on ‖P_L μ‖₂. For the p-value, the statement ⟨dir_L(Y), μ⟩ = 0 is implied by the stronger null
hypothesis μ ⊥ L. We can also use Lemma 1 to give a two-sided confidence interval for ⟨dir_L(Y), μ⟩;
specifically, ⟨dir_L(Y), μ⟩ lies in the interval [L_{α/2}, L_{1−α/2}] with probability 1 − α. However, in
general this cannot be extended to a two-sided interval for ‖P_L μ‖₂.
To compare to the main results of Loftus and Taylor [9], their work produces the p-value (4) testing
the null hypothesis μ ⊥ L, but does not extend to testing P_L μ beyond the null hypothesis, which the
second part (5) of our main theorem is able to do.²
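Both (4) and (5) reduce to one-dimensional integrals over R_Y, so once R_Y is available as a union of intervals they are easy to evaluate; a minimal sketch (Python/SciPy, illustrative only; the bracket in the root-finder is a heuristic we introduce here):
```python
import numpy as np
from scipy.integrate import quad
from scipy.optimize import brentq

def trunc_integral(intervals, k, sigma, L, lower=None):
    """Integral of r^(k-1)*exp(-(r^2 - 2*r*L)/(2*sigma^2)) over a union of
    intervals (list of (lo, hi) pairs; hi may be np.inf), optionally
    restricted to r > lower."""
    dens = lambda r: r**(k - 1) * np.exp(-(r**2 - 2*r*L) / (2*sigma**2))
    total = 0.0
    for lo, hi in intervals:
        if lower is not None:
            lo = max(lo, lower)
        if lo < hi:
            total += quad(dens, lo, hi)[0]
    return total

def selective_pvalue(intervals, k, sigma, stat):
    # Equation (4): upper-tail mass of the truncated density, with L = 0.
    return (trunc_integral(intervals, k, sigma, 0.0, lower=stat)
            / trunc_integral(intervals, k, sigma, 0.0))

def lower_conf_bound(intervals, k, sigma, stat, alpha, span=10.0):
    # Equation (5): the tail ratio is increasing in L, so solve by bisection.
    # (For very wide brackets one should re-center the exponent to avoid
    # overflow; the modest bracket here is a heuristic.)
    f = lambda L: (trunc_integral(intervals, k, sigma, L, lower=stat)
                   / trunc_integral(intervals, k, sigma, L)) - alpha
    return brentq(f, -span * sigma, span * sigma)
```
Here `stat` is the observed ‖P_L Y‖₂ and `intervals` is R_Y, computed as in Section 3.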
3 Applications to group sparse regression methods
In this section we develop inference tools for three methods for group-sparse model selection: forward
stepwise regression (also considered by Loftus and Taylor [9] with results on hypothesis testing),
iterative hard thresholding (IHT), and the group lasso.
²Their work furthermore considers the special case where the conditioning event, Y ∈ A, is determined by a
"quadratic selection rule," that is, A is defined by a set of quadratic constraints on y ∈ ℝⁿ. However, extending
to the general case is merely a question of computation (as we explore below for performing inference for the
group lasso) and this extension should not be viewed as a primary contribution of this work.
3.1 General recipe
With a fixed design matrix, the outcome of any group-sparse selection method is a function of Y.
For example, a forward stepwise procedure determines a particular sequence of groups of variables.
We call such an outcome a selection event, and assume that the set of all selection events forms a
countable partition of ℝⁿ into disjoint open sets: ℝⁿ = ∪_e A_e.³ Each data vector y ∈ ℝⁿ determines
a selection event, denoted e(y), and thus y ∈ A_{e(y)}.
Let S(y) ⊆ [G] be the set of groups selected for testing. This is assumed to be a function of e(y),
i.e. S(y) = S_e for all y ∈ A_e. For any g ∈ S_e, let L_{e,g} = span(P^⊥_{X_{S_e\g}} X_g), the subspace of ℝⁿ
indicating correlation with group X_g beyond what can be explained by the other selected groups.
Write R_Y = {r > 0 : r·U + Y⊥ ∈ A_{e(Y)}}, where U = dir_{L_{e(Y),g}}(Y) and Y⊥ = P^⊥_{L_{e(Y),g}} Y. If
we condition on the event {Y ∈ A_e} for some e, then as soon as we have calculated the region
R_Y ⊆ ℝ₊, Theorem 1 will allow us to perform inference on the quantity of interest ‖P_{L_{e,g}} μ‖₂
by evaluating the expressions (4) and (5). In other words, we are testing whether μ is significantly
correlated with the group X_g, after controlling for all the other selected groups, S(Y)\g = S_e\g.
To evaluate these expressions accurately, ideally we would like an explicit characterization of the
region R_Y ⊆ ℝ₊. To gain a better intuition for this set, define z_Y(r) = r·U + Y⊥ ∈ ℝⁿ for r > 0,
and note that z_Y(r) = Y when we plug in r = ‖P_{L_{e(Y),g}} Y‖₂. Then we see that
R_Y = {r > 0 : e(z_Y(r)) = e(Y)}.   (6)
In other words, we need to find the range of values of r such that, if we replace Y with z_Y(r), then
this does not change the output of the model selection algorithm, i.e. e(z_Y(r)) = e(Y). For the
forward stepwise and IHT methods, we find that we can calculate R_Y explicitly. For the group
lasso, we cannot calculate R_Y explicitly, but we can nonetheless compute the integrals required by
Theorem 1 through numerical approximations. We now present the details for each of these methods.
3.2 Forward stepwise regression
Forward stepwise regression [2, 14] is a simple and widely used method. We will use the following
version:⁴ for design matrix X and response Y = y,
1. Initialize the residual r̂₀ = y and the model S₀ = ∅.
2. For t = 1, 2, ..., T,
(a) Let g_t = argmax_{g∈[G]\S_{t−1}} ‖X_g^⊤ r̂_{t−1}‖₂.
(b) Update the model, S_t = {g₁, ..., g_t}, and update the residual, r̂_t = P^⊥_{X_{S_t}} y.
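For concreteness, a minimal sketch of this selection loop (Python/NumPy; the authors' reference code is in R, so this is illustrative only, with `groups[g]` a column-index array):
```python
import numpy as np

def forward_stepwise_groups(X, y, groups, T):
    """Greedy group selection: pick the group most correlated with the
    current residual, then recompute the residual by projecting y onto the
    orthocomplement of the selected columns."""
    selected = []
    resid = y.copy()
    for _ in range(T):
        scores = {g: np.linalg.norm(X[:, groups[g]].T @ resid)
                  for g in range(len(groups)) if g not in selected}
        g_best = max(scores, key=scores.get)
        selected.append(g_best)
        Xs = np.hstack([X[:, groups[g]] for g in selected])
        resid = y - Xs @ np.linalg.lstsq(Xs, y, rcond=None)[0]
    return selected
```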
Testing all groups at time T. First we consider the inference procedure where, at time T, we would
like to test each selected group g_t for t = 1, ..., T; inference for this procedure was derived also
in [8]. Our selection event e(Y) is the ordered sequence g₁, ..., g_T of selected groups. For a response
vector Y = y, this selection event is equivalent to
‖X_{g_k}^⊤ P^⊥_{X_{S_{k−1}}} y‖₂ > ‖X_g^⊤ P^⊥_{X_{S_{k−1}}} y‖₂ for all k = 1, ..., T, for all g ∉ S_k.   (7)
Now we would like to perform inference on the group g = g_t, while controlling for the other groups
in S(Y) = S_T. Define U, Y⊥, and z_Y(r) as before. Then, to determine R_Y = {r > 0 : z_Y(r) ∈
A_{e(Y)}}, we check whether all of the inequalities in (7) are satisfied with y = z_Y(r): for each
k = 1, ..., T and each g ∉ S_k, the corresponding inequality of (7) can be expressed as
r²·‖X_{g_k}^⊤ P^⊥_{X_{S_{k−1}}} U‖₂² + 2r·⟨X_{g_k}^⊤ P^⊥_{X_{S_{k−1}}} U, X_{g_k}^⊤ P^⊥_{X_{S_{k−1}}} Y⊥⟩ + ‖X_{g_k}^⊤ P^⊥_{X_{S_{k−1}}} Y⊥‖₂²
> r²·‖X_g^⊤ P^⊥_{X_{S_{k−1}}} U‖₂² + 2r·⟨X_g^⊤ P^⊥_{X_{S_{k−1}}} U, X_g^⊤ P^⊥_{X_{S_{k−1}}} Y⊥⟩ + ‖X_g^⊤ P^⊥_{X_{S_{k−1}}} Y⊥‖₂².
Solving this quadratic inequality over r ∈ ℝ₊, we obtain a region I_{k,g} ⊆ ℝ₊ which is either a single
interval or a union of two disjoint intervals, whose endpoints we can calculate explicitly with the
quadratic formula. The set R_Y is then given by all values r that satisfy the full set of inequalities:
R_Y = ∩_{k=1,...,T} ∩_{g∈[G]\S_k} I_{k,g}.
This is a union of finitely many disjoint intervals, whose endpoints are calculated explicitly as above.
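Each inequality above has the form A·r² + 2B·r + C > 0, where A, B, C are the differences of the squared-norm, cross, and constant terms of the two sides; a sketch (Python, assumptions as above) of solving one such inequality over r > 0:
```python
import numpy as np

def quadratic_region(A, B, C, eps=1e-12):
    """Feasible set {r > 0 : A*r^2 + 2*B*r + C > 0} as a list of (lo, hi)
    intervals (hi may be np.inf)."""
    if abs(A) < eps:                        # degenerate, linear case
        if abs(B) < eps:
            return [(0.0, np.inf)] if C > 0 else []
        r0 = -C / (2 * B)
        return [(max(r0, 0.0), np.inf)] if B > 0 else [(0.0, max(r0, 0.0))]
    disc = B**2 - A * C
    if disc <= 0:                           # no real roots: constant sign
        return [(0.0, np.inf)] if A > 0 else []
    r1, r2 = sorted([(-B - np.sqrt(disc)) / A, (-B + np.sqrt(disc)) / A])
    if A > 0:                               # feasible outside the roots
        out = [(0.0, max(r1, 0.0)), (max(r2, 0.0), np.inf)]
    else:                                   # feasible between the roots
        out = [(max(r1, 0.0), max(r2, 0.0))]
    return [(lo, hi) for lo, hi in out if hi > lo]
```
Intersecting the returned regions over all (k, g) yields R_Y.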
³Since the distribution of Y is continuous on ℝⁿ, we ignore sets of measure zero without further comment.
⁴In practice, we would add some correction for the scale of the columns of X_g or for the number of features
in group g; this can be accomplished with simple modifications of the forward stepwise procedure.
Sequential testing. Now suppose we carry out a sequential inference procedure, testing group g_t
at its time of selection, controlling only for the previously selected groups S_{t−1}. In fact, this is a
special case of the non-sequential procedure above, which shows how to test g_T while controlling
for S_T\g_T = S_{T−1}. Applying this method at each stage of the algorithm yields a sequential testing
procedure. (The method developed in [9] computes p-values for this problem, testing whether
μ ⊥ P^⊥_{X_{S_{t−1}}} X_{g_t} at each time t.) See the supplementary material for detailed pseudo-code.
3.3 Iterative hard thresholding (IHT)
The iterative hard thresholding algorithm finds a k-group-sparse solution to the linear regression
problem, iterating gradient descent steps with hard thresholding to update the model choice as needed
[1, 4]. Given k ≥ 1, number of iterations T, step sizes η_t, design matrix X and response Y = y,
1. Initialize the coefficient vector, β₀ = 0 ∈ ℝᵖ (or any other desired initial point).
2. For t = 1, 2, ..., T,
(a) Take a gradient step, β̃_t = β_{t−1} − η_t X^⊤(Xβ_{t−1} − y).
(b) Compute ‖(β̃_t)_{C_g}‖₂ for each g ∈ [G] and let S_t ⊆ [G] index the k largest norms.
(c) Update the fitted coefficients β_t via (β_t)_j = (β̃_t)_j · 1{j ∈ ∪_{g∈S_t} C_g}.
Here we are typically interested in testing Question_{g,S_T} for each g ∈ S_T. We condition on the
selection event, e(Y), given by the sequence of k-group-sparse models S₁, ..., S_T selected at each
stage of the algorithm, which is characterized by the inequalities
‖(β̃_t)_{C_g}‖₂ > ‖(β̃_t)_{C_h}‖₂ for all t = 1, ..., T, and all g ∈ S_t, h ∉ S_t.   (8)
Fixing a group g ∈ S_T to test, determining R_Y = {r > 0 : z_Y(r) ∈ A_{e(Y)}} involves checking
whether all of the inequalities in (8) are satisfied with y = z_Y(r). First, with the response Y replaced
by y = z_Y(r), we show that we can write β̃_t = r·c_t + d_t for each t = 1, ..., T, where c_t, d_t ∈ ℝᵖ
are independent of r; in the supplementary material, we derive c_t, d_t inductively as
c₁ = η₁ X^⊤U,   c_t = (I_p − η_t X^⊤X) P_{S_{t−1}} c_{t−1} + η_t X^⊤U for t ≥ 2,
d₁ = (I_p − η₁ X^⊤X) β₀ + η₁ X^⊤Y⊥,   d_t = (I_p − η_t X^⊤X) P_{S_{t−1}} d_{t−1} + η_t X^⊤Y⊥ for t ≥ 2.
Now we compute the region R_Y. For each t = 1, ..., T and each g ∈ S_t, h ∉ S_t, the corresponding
inequality in (8), after writing β̃_t = r·c_t + d_t, can be expressed as
r²·‖(c_t)_{C_g}‖₂² + 2r·⟨(c_t)_{C_g}, (d_t)_{C_g}⟩ + ‖(d_t)_{C_g}‖₂² > r²·‖(c_t)_{C_h}‖₂² + 2r·⟨(c_t)_{C_h}, (d_t)_{C_h}⟩ + ‖(d_t)_{C_h}‖₂².
As for the forward stepwise procedure, solving this quadratic inequality over r ∈ ℝ₊, we obtain a
region I_{t,g,h} ⊆ ℝ₊ that is either a single interval or a union of two disjoint intervals whose endpoints
we can calculate explicitly. Finally, we obtain R_Y = ∩_{t=1,...,T} ∩_{g∈S_t} ∩_{h∈[G]\S_t} I_{t,g,h}.
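The (c_t, d_t) recursion is cheap to carry along the IHT trajectory; a sketch (Python/NumPy, with the coordinate projection P_S implemented as masking; indexing conventions are ours):
```python
import numpy as np

def iht_affine_path(X, U, Y_perp, beta0, supports, etas):
    """Track the coefficients of beta_tilde_t = r*c_t + d_t along the IHT
    path. etas[t-1] is the step size eta_t; supports[t-1] is the index set
    of coordinates kept by the hard threshold at step t."""
    p = X.shape[1]
    XtX = X.T @ X
    c = etas[0] * (X.T @ U)                                         # c_1
    d = beta0 - etas[0] * (XtX @ beta0) + etas[0] * (X.T @ Y_perp)  # d_1
    cs, ds = [c], [d]
    for t in range(2, len(etas) + 1):
        mask = np.zeros(p)
        mask[supports[t - 2]] = 1.0       # coordinate projection P_{S_{t-1}}
        eta = etas[t - 1]
        c = (mask * c) - eta * (XtX @ (mask * c)) + eta * (X.T @ U)
        d = (mask * d) - eta * (XtX @ (mask * d)) + eta * (X.T @ Y_perp)
        cs.append(c)
        ds.append(d)
    return cs, ds
```
The quadratic inequalities in (8) are then solved with the same routine as in Section 3.2.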
3.4 The group lasso
The group lasso, first introduced by Yuan and Lin [15], is a convex optimization method for linear
regression where the form of the penalty is designed to encourage group-wise sparsity of the solution.
It is an extension of the lasso method [12] for linear regression. The method is given by
β̂ = argmin_β ½‖y − Xβ‖₂² + λ Σ_g ‖β_{C_g}‖₂,
where λ > 0 is a penalty parameter. The penalty Σ_g ‖β_{C_g}‖₂ promotes sparsity at the group level.⁵
For this method, we perform inference on the group support S of the fitted model β̂. We would like
to test Question_{g,S} for each g ∈ S. In this setting, for groups of size ≥ 2, we believe that it is not
possible to analytically calculate R_Y, and furthermore, that there is no additional information that we
can condition on to make this computation possible, without losing all power to do inference.
We thus propose a numerical approximation that circumvents the need for an explicit calculation of
R_Y. Examining the calculation of the p-value P and the lower bound L_α in Theorem 1, we see that
we can write P = f_Y(0) and can find L_α as the unique solution to f_Y(L_α) = α, where
f_Y(t) = E_{r∼σ·χ_k}[e^{rt/σ²} · 1{r ∈ R_Y, r > ‖P_L Y‖₂}] / E_{r∼σ·χ_k}[e^{rt/σ²} · 1{r ∈ R_Y}],
⁵Our method can also be applied to a modification of group lasso designed for overlapping groups [3] with a
nearly identical procedure but we do not give details here.
where we treat Y as fixed in this calculation and set k = dim(L) = rank(X_S) − rank(X_{S\g}). Both the numerator
and denominator can be approximated by taking a large number B of samples r ∼ σ·χ_k and taking
the empirical expectations. Checking r ∈ R_Y is equivalent to running the group lasso with the
response replaced by y = z_Y(r), and checking if the resulting selected model remains unchanged.
This may be problematic, however, if R_Y is in the tails of the σ·χ_k distribution. We implement
an importance sampling approach by repeatedly drawing r ∼ φ for some density φ; we find that
φ = ‖P_L Y‖₂ + N(0, σ²) works well in practice. Given samples r₁, ..., r_B ∼ φ we then estimate
f_Y(t) ≈ f̂_Y(t) := [Σ_b (φ_{σ·χ_k}(r_b)/φ(r_b)) e^{r_b t/σ²} · 1{r_b ∈ R_Y, r_b > ‖P_L Y‖₂}] / [Σ_b (φ_{σ·χ_k}(r_b)/φ(r_b)) e^{r_b t/σ²} · 1{r_b ∈ R_Y}],
where φ_{σ·χ_k} is the density of the σ·χ_k distribution. We then estimate P ≈ P̂ = f̂_Y(0). Finally, since
f̂_Y(t) is continuous and strictly increasing in t, we estimate L_α by numerically solving f̂_Y(t) = α.
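A sketch of the importance-sampling estimate f̂_Y(t) (Python/SciPy; the 0/1 membership flags are assumed to come from re-running the group lasso at z_Y(r), which is by far the dominant cost):
```python
import numpy as np
from scipy.stats import chi, norm

def f_hat(t, r_samples, in_RY, stat, k, sigma):
    """Importance-sampling estimate of f_Y(t). r_samples are draws from the
    proposal N(stat, sigma^2), where stat = ||P_L Y||_2; in_RY is a 0/1
    array marking whether each draw lies in R_Y."""
    target = chi.pdf(r_samples / sigma, k) / sigma    # density of sigma*chi_k
    proposal = norm.pdf(r_samples, loc=stat, scale=sigma)
    w = (target / proposal) * np.exp(r_samples * t / sigma**2) * in_RY
    return np.sum(w * (r_samples > stat)) / np.sum(w)
```
Since f̂_Y is increasing in t, solving f̂_Y(t) = α for the confidence bound can be done with any standard scalar root-finder.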
4 Experiments
In this section we present results from experiments on simulated and real data, performed in R [11].⁶
4.1 Simulated data
We fix sample size n = 500 and G = 50 groups each of size 10. For each trial, we generate a design
matrix X with i.i.d. N(0, 1/n) entries, set β with its first 50 entries (corresponding to the first s = 5
groups) equal to τ and all other entries equal to 0, and set Y = Xβ + N(0, Iₙ). We present the
result for IHT here; the results for the other two methods can be found in the supplementary material.
We run IHT to select k = 10 groups over T = 5 iterations, with step sizes η_t = 2 and initial point
β₀ = 0. For a moderate signal strength τ = 1.5, we plot the p-values for each selected group in
Figure 1; each group displays p-values only for those trials in which it was selected. The histograms of
p-values for the s true signals and for the G − s nulls are also shown. We see that the distribution
of p-values for the true signals concentrates near zero while the null p-values are roughly uniform.
Next we look at the confidence intervals given by our method, examining their empirical coverage
across different signal strengths τ in Figure 2. We fix confidence level 0.9 (i.e. α = 0.1) and check
empirical coverage with respect to both ‖P_L μ‖₂ and ⟨dir_L(Y), μ⟩, with results shown separately
for true signals and for nulls. For true signals, the confidence interval for ‖P_L μ‖₂ is somewhat
conservative while the coverage for ⟨dir_L(Y), μ⟩ is right at the target level, as expected from our
theory. As signal strength τ increases, the gap is reduced for the true signals; this is because
dir_L(Y) becomes an increasingly more accurate estimate of dir_L(μ), and so the gap in the inequality
‖P_L μ‖₂ ≥ ⟨dir_L(Y), μ⟩ is reduced. For the nulls, if the set of selected groups contains the support
of the true model, which is nearly always true for higher signal levels τ, then the two are equivalent
(as ‖P_L μ‖₂ = ⟨dir_L(Y), μ⟩ = 0), and coverage is at the target level. At low signal levels τ, however,
a true group is occasionally missed, in which case ‖P_L μ‖₂ > ⟨dir_L(Y), μ⟩ strictly.
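For reference, the simulated-data design above can be generated in a few lines (Python/NumPy sketch mirroring the setup; the paper's own experiments are in R [11]):
```python
import numpy as np

rng = np.random.default_rng(0)
n, G, gsize, s, tau = 500, 50, 10, 5, 1.5
p = G * gsize
X = rng.normal(scale=1 / np.sqrt(n), size=(n, p))  # i.i.d. N(0, 1/n) entries
beta = np.zeros(p)
beta[: s * gsize] = tau                # first s groups carry the signal
Y = X @ beta + rng.normal(size=n)      # noise N(0, I_n)
groups = [np.arange(g * gsize, (g + 1) * gsize) for g in range(G)]
```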
Figure 1: Iterative hard thresholding (IHT). For each group, we plot its p-value for each trial in which
that group was selected, for 200 trials. Histograms of the p-values for true signals (left, red) and for
nulls (right, gray) are attached.
4.2 California health data
We examine the 2015 California county health data⁷ which was also studied by Loftus and Taylor
[9]. We fit a linear model where the response is the log-years of potential life lost and the covariates
⁶Code reproducing experiments: http://www.stat.uchicago.edu/~rina/group_inf.html
⁷Available at http://www.countyhealthrankings.org
Figure 2: Iterative hard thresholding (IHT). Empirical coverage over 2000 trials with signal strength
τ. "Norm" and "inner product" refer to coverage of ‖P_L μ‖₂ and ⟨dir_L(Y), μ⟩, respectively.
are the 34 predictors in this data set. We first let each predictor be its own group (i.e., group
size 1) and run the three algorithms considered in Section 3. Next, we form a grouped model by
expanding each predictor X_j into a group using the first three non-constant Legendre polynomials,
(X_j, ½(3X_j² − 1), ½(5X_j³ − 3X_j)). In each case we set parameters so that 8 groups are selected. The
selected groups and their p-values are given in Table 1; interestingly, even when the same predictor is
selected by multiple methods, its p-value can differ substantially across the different methods.
Group size 1:
  Forward stepwise (p-value / seq. p-value):
    80th percentile income           0.116 / 0.000
    Injury death rate                0.000 / 0.000
    Violent crime rate               0.016 / 0.000
    % Receiving HbA1c                0.591 / 0.839
    % Obese                          0.481 / 0.464
    Chlamydia rate                   0.944 / 0.975
    % Physically inactive            0.654 / 0.812
    % Alcohol-impaired               0.104 / 0.104
  Iterative hard thresholding (p-value):
    80th percentile income           0.000
    Injury death rate                0.000
    % Smokers                        0.004
    % Single-parent household        0.009
    % Children in poverty            0.332
    Physically unhealthy days        0.716
    Food environment index           0.807
    Mentally unhealthy days          0.957
  Group lasso (p-value):
    80th percentile income           0.000
    % Obese                          0.007
    % Physically inactive            0.040
    Violent crime rate               0.055
    % Single-parent household        0.075
    Injury death rate                0.235
    % Smokers                        0.701
    Preventable hospital stays rate  0.932
Group size 3:
  Forward stepwise (p-value / seq. p-value):
    80th percentile income           0.001 / 0.000
    Injury death rate                0.044 / 0.000
    Violent crime rate               0.793 / 0.617
    % Physically inactive            0.507 / 0.249
    % Alcohol-impaired               0.892 / 0.933
    % Severe housing problems        0.119 / 0.496
    Chlamydia rate                   0.188 / 0.099
    Preventable hospital stays rate  0.421 / 0.421
  Iterative hard thresholding (p-value):
    Injury death rate                0.000
    80th percentile income           0.000
    % Smokers                        0.000
    % Single-parent household        0.005
    Food environment index           0.057
    % Children in poverty            0.388
    Physically unhealthy days        0.713
    Mentally unhealthy days          0.977
  Group lasso (p-value):
    80th percentile income           0.000
    Injury death rate                0.000
    % Single-parent household        0.038
    % Physically inactive            0.043
    % Obese                          0.339
    % Alcohol-impaired               0.366
    % Smokers                        0.372
    Violent crime rate               0.629
Table 1: Selective p-values for the California county health data experiment. The predictors obtained
with forward stepwise are tested both simultaneously at the end of the procedure (first p-value shown),
and also tested sequentially (second p-value shown), and are displayed in the selected order.
5 Conclusion
We develop selective inference tools for group-sparse linear regression methods, where for a data-dependent selected set of groups S, we are able to both test each group g ∈ S for inclusion in the
model defined by S, and form a confidence interval for the effect size of group g in the model. Our
theoretical results can be easily applied to a range of commonly used group-sparse regression methods,
thus providing an efficient tool for finite-sample inference that correctly accounts for data-dependent
model selection in the group-sparse setting.
Acknowledgments
Research supported in part by ONR grant N00014-15-1-2379, and NSF grants DMS-1513594 and
DMS-1547396.
References
[1] Thomas Blumensath and Mike E Davies. Sampling theorems for signals from the union of finite-dimensional linear subspaces. Information Theory, IEEE Transactions on, 55(4):1872-1882,
2009.
[2] Trevor Hastie, Robert Tibshirani, and Jerome Friedman. The Elements of Statistical Learning.
Springer Series in Statistics. Springer New York Inc., New York, NY, USA, 2001.
[3] Laurent Jacob, Guillaume Obozinski, and Jean-Philippe Vert. Group lasso with overlap and
graph lasso. In Proceedings of the 26th annual international conference on machine learning,
pages 433-440. ACM, 2009.
[4] Prateek Jain, Nikhil Rao, and Inderjit S. Dhillon. Structured sparse regression via greedy hard-thresholding. CoRR, abs/1602.06042, 2016. URL http://arxiv.org/abs/1602.06042.
[5] Jason D Lee, Dennis L Sun, Yuekai Sun, and Jonathan E Taylor. Exact post-selection inference
with the lasso. arXiv preprint arXiv:1311.6238, 2013.
[6] Jason D. Lee and Jonathan E. Taylor. Exact post model selection inference for marginal
screening. In Advances in Neural Information Processing Systems 27, pages 136-144, 2014.
[7] Joshua R Loftus. Selective inference after cross-validation. arXiv preprint arXiv:1511.08866,
2015.
[8] Joshua R Loftus and Jonathan E Taylor. A significance test for forward stepwise model selection.
arXiv preprint arXiv:1405.3920, 2014.
[9] Joshua R. Loftus and Jonathan E. Taylor. Selective inference in regression models with groups
of variables. arXiv:1511.01478, 2015.
[10] Sofia Mosci, Silvia Villa, Alessandro Verri, and Lorenzo Rosasco. A primal-dual algorithm
for group sparse regularization with overlapping groups. In Advances in Neural Information
Processing Systems 23, pages 2604-2612, 2010.
[11] R Core Team. R: A Language and Environment for Statistical Computing. R Foundation for
Statistical Computing, Vienna, Austria, 2016. URL https://www.R-project.org/.
[12] Robert Tibshirani. Regression shrinkage and selection via the lasso. Journal of the Royal
Statistical Society. Series B (Methodological), pages 267-288, 1996.
[13] Ryan J Tibshirani, Jonathan Taylor, Richard Lockhart, and Robert Tibshirani. Exact post-selection inference for sequential regression procedures. arXiv preprint arXiv:1401.3889,
2014.
[14] Joel A. Tropp and Anna C. Gilbert. Signal recovery from random measurements via orthogonal
matching pursuit. IEEE Trans. Information Theory, 53(12):4655-4666, 2007. doi: 10.1109/TIT.
2007.909108. URL http://dx.doi.org/10.1109/TIT.2007.909108.
[15] Ming Yuan and Yi Lin. Model selection and estimation in regression with grouped variables.
Journal of the Royal Statistical Society: Series B (Statistical Methodology), 68(1):49-67, 2006.
6,011 | 6,438 | Accelerating Stochastic Composition Optimization
Mengdi Wang∗, Ji Liu∗, and Ethan X. Fang
Princeton University, University of Rochester, Pennsylvania State University
mengdiw@princeton.edu, ji.liu.uwisc@gmail.com, xxf13@psu.edu
Abstract
Consider the stochastic composition optimization problem where the objective is a
composition of two expected-value functions. We propose a new stochastic firstorder method, namely the accelerated stochastic compositional proximal gradient
(ASC-PG) method, which updates based on queries to the sampling oracle using
two different timescales. The ASC-PG is the first proximal gradient method for
the stochastic composition problem that can deal with nonsmooth regularization
penalty. We show that the ASC-PG exhibits faster convergence than the best known
algorithms, and that it achieves the optimal sample-error complexity in several
important special cases. We further demonstrate the application of ASC-PG to
reinforcement learning and conduct numerical experiments.
1 Introduction
The popular stochastic gradient methods are well suited for minimizing expected-value objective
functions or the sum of a large number of loss functions. Stochastic gradient methods find wide
applications in estimation, online learning, and training of deep neural networks. Despite their
popularity, they do not apply to the minimization of a nonlinear function involving expected values or
a composition between two expected-value functions.
In this paper, we consider the stochastic composition problem, given by
min_{x∈ℝⁿ} H(x) := E_v(f_v(E_w(g_w(x)))) + R(x),   where F(x) := E_v(f_v(E_w(g_w(x)))),   (1)
where (f∘g)(x) = f(g(x)) denotes the function composition, g_w(·): ℝⁿ → ℝᵐ and
f_v(·): ℝᵐ → ℝ are continuously differentiable functions, v, w are random variables, and
R(x): ℝⁿ → ℝ ∪ {+∞} is an extended real-valued closed convex function. We assume throughout
that there exists at least one optimal solution x* to problem (1). We focus on the case where f_v and
g_w are smooth, but we allow R to be a nonsmooth penalty such as the ℓ₁-norm. We do not require
either the outer function f_v or the inner function g_w to be convex or monotone. As a result, the
composition problem cannot be reformulated into a saddle point problem in general.
Our algorithmic objective is to develop efficient algorithms for solving problem (1) based on random
evaluations of fv , gw and their gradients. Our theoretical objective is to analyze the rate of convergence for the stochastic algorithm and to improve it when possible. In the online setting, the iteration
complexity of our stochastic methods can be interpreted as a sample-error complexity upper bound
for estimating the optimal solution of problem (1).
1.1 Motivating Examples
One motivating example is reinforcement learning [Sutton and Barto, 1998]. Consider a controllable
Markov chain with states 1, ..., S. Estimating the value-per-state of a fixed control policy π is known
∗Equal contribution.
as on-policy learning. It can be cast into an S × S system of Bellman equations:
γ P^π V^π + r^π = V^π,
where γ ∈ (0, 1) is a discount factor, P^π_{ss̃} is the transition probability from state s to state s̃, and r^π_s is
the expected state transition reward at state s. The solution V^π to the Bellman equation is the value
vector, with V^π(s) being the total expected reward starting at state s. In the blackbox simulation
environment, P^π, r^π are unknown but can be sampled from a simulator. As a result, solving the
Bellman equation becomes a special case of the stochastic composition optimization problem:
min_{x∈ℝ^S} ‖E[A]x − E[b]‖²,   (2)
where A, b are random matrices and random vectors such that E[A] = I − γP^π and E[b] = r^π. It
can be viewed as the composition of the square norm function and the expected linear function. We
will give more details on the reinforcement learning application in Section 4.
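To make the correspondence concrete, here is a sketch (Python/NumPy) of how one-step simulation produces the unbiased samples entering problem (2); the specific noise model is illustrative, not part of the paper.
```python
import numpy as np

def sample_bellman_pair(P_pi, r_pi, gamma, rng):
    """One-step simulation giving unbiased samples of A and b in (2):
    E[A_hat] = I - gamma*P_pi and E[b_hat] = r_pi."""
    S = len(r_pi)
    A_hat = np.eye(S)
    b_hat = np.zeros(S)
    for s in range(S):
        s_next = rng.choice(S, p=P_pi[s])            # observe one transition
        A_hat[s, s_next] -= gamma
        b_hat[s] = r_pi[s] + rng.normal(scale=0.1)   # noisy observed reward
    return A_hat, b_hat

# Composition structure of (2): inner function g(x) = E[A]x - E[b] (expected
# linear map, estimated from samples), outer function f(u) = ||u||^2.
```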
Another motivating example is risk-averse learning. For example, consider the mean-variance
minimization problem
min_x E_{a,b}[h(x; a, b)] + λ Var_{a,b}[h(x; a, b)],
where h(x; a, b) is some loss function parameterized by random variables a and b, and λ > 0 is a
regularization parameter. Its batch version takes the form
min_x (1/N) Σ_{i=1}^N h(x; a_i, b_i) + (λ/N) Σ_{i=1}^N ( h(x; a_i, b_i) − (1/N) Σ_{j=1}^N h(x; a_j, b_j) )².
Here the variance term is the composition of the mean square function and an expected loss function.
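One way (not the only one) to cast the mean-variance objective in the form (1): let the inner function collect the first two moments of the loss and the outer function combine them, as in this Python sketch.
```python
import numpy as np

def g_w(x, w, h):
    """Inner random map: g_w(x) = (h(x; w), h(x; w)^2), so that
    g(x) = E_w[g_w(x)] collects the first two moments of the loss."""
    hv = h(x, w)
    return np.array([hv, hv**2])

def f(u, lam):
    """Outer map: f(u1, u2) = u1 + lam*(u2 - u1^2), giving
    f(g(x)) = E[h] + lam*(E[h^2] - E[h]^2) = E[h] + lam*Var(h)."""
    return u[0] + lam * (u[1] - u[0]**2)
```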
Although the stochastic composition problem (1) was barely studied, it actually finds a broad spectrum
of emerging applications in estimation and machine learning (see Wang et al. [2016] for a list of
applications). Fast optimization algorithms with theoretical guarantees will lead to new computation
tools and online learning methods for a broader problem class, no longer limited to the expectation
minimization problem.
1.2 Related Works and Contributions
Contrary to the expectation minimization problem, "unbiased" gradient samples are no longer
available for the stochastic composition problem (1). The objective is nonlinear in the joint probability
distribution of (w, v), which substantially complicates the problem. In a recent work by Dentcheva
et al. [2015], a special case of the stochastic composition problem, i.e., risk-averse optimization,
has been studied. A central limit theorem has been established, showing that the K-sample batch
problem converges to the true problem at the rate of O(1/√K) in a proper sense. For the case
where R(x) = 0, Wang et al. [2016] has proposed and analyzed a class of stochastic compositional
gradient/subgradient methods (SCGD). The SCGD involves two iterations of different time scales,
one for estimating x? by a stochastic quasi-gradient iteration, the other for maintaining a running
estimate of g(x? ). Wang and Liu [2016] studies the SCGD in the setting where samples are corrupted
with Markov noises (instead of i.i.d. zero-mean noises). Both works establish almost sure convergence
of the algorithm and several convergence rate results, which are the best-known convergence rate
prior to the current paper.
The idea of using two-timescale quasi-gradient traced back to the earlier work Ermoliev [1976]. The
incremental treatment of proximal gradient iteration has been studied extensively for the expectation
minimization problem, see for examples Beck and Teboulle [2009], Bertsekas [2011], Ghadimi and
Lan [2015], Gurbuzbalaban et al. [2015], Nedi?c [2011], Nedi?c and Bertsekas [2001], Nemirovski
et al. [2009], Rakhlin et al. [2012], Shamir and Zhang [2013], Wang and Bertsekas [2016], Wang et al.
[2015]. However, except for Wang et al. [2016] and Wang and Liu [2016], all of these works focus
on variants of the expectation minimization problem and do not apply to the stochastic composition
problem (1). Another work partially related to this paper is by Dai et al. [2016]. They consider a
special case of problem (1) arising in kernel estimation, where they assume that all functions fv ?s are
convex and their conjugate functions fv? ?s can be easily obtained/sampled. Under these additional
assumptions, they essentially rewrite the problem into a saddle point optimization involving functional
variables.
In this paper, we propose a new accelerated stochastic compositional proximal gradient (ASC-PG)
method that applies to the penalized problem (1), which is a more general problem than the one
considered in Wang et al. [2016]. We use a coupled martingale stochastic analysis to show that
ASC-PG achieves significantly better sample-error complexity in various cases. We also show that
ASC-PG exhibits optimal sample-error complexity in two important special cases: the case where the
outer function is linear and the case where the inner function is linear.
Our contributions are summarized as follows:
1. We propose the first stochastic proximal-gradient method for the stochastic composition problem.
This is the first algorithm that is able to address the nonsmooth regularization penalty R(·) without
deteriorating the convergence rate.
2. We obtain a convergence rate O(K^{−4/9}) for smooth optimization problems that are not necessarily
convex, where K is the number of queries to the stochastic first-order oracle. This improves the best
known convergence rate and provides a new benchmark for the stochastic composition problem.
3. We provide a comprehensive analysis and results that apply to various special cases. In particular,
our results contain as special cases the known optimal rate results for the expectation minimization
problem, i.e., O(1/√K) for general objectives and O(1/K) for strongly convex objectives.
4. In the special case where the inner function g(·) is a linear mapping, we show that it is sufficient
to use one timescale to guarantee convergence. Our result achieves the non-improvable rate of
convergence O(1/K) for optimal strongly convex optimization and O(1/√K) for nonconvex
smooth optimization. It implies that the inner linearity does not bring fundamental difficulty to the
stochastic composition problem.
5. We show that the proposed method leads to a new on-policy reinforcement learning algorithm.
The new learning algorithm achieves the optimal convergence rate O(1/√K) for solving Bellman
equations (or O(1/K) for solving the least square problem) based on K observations of state-to-state transitions.
In comparison with Wang et al. [2016], our analysis is more succinct and leads to stronger results.
To the best of our knowledge, Theorems 1 and 2 in this paper provide the best-known rates for the
stochastic composition problem.
Paper Organization. Section 2 states the sampling oracle and the accelerated stochastic compositional proximal gradient algorithm (ASC-PG). Section 3 states the convergence rate results in the case
of general nonconvex objective and in the case of strongly convex objective, respectively. Section 4
describes an application of ASC-PG to reinforcement learning and gives numerical experiments.
Notations and Definitions. For x ∈ ℝⁿ, we denote by x′ its transpose, and by ‖x‖ its Euclidean
norm (i.e., ‖x‖ = √(x′x)). For two sequences {y_k} and {z_k}, we write y_k = O(z_k) if there exists
a constant c > 0 such that ‖y_k‖ ≤ c‖z_k‖ for each k. We denote by I^{value}_{condition} the indicator function,
which returns "value" if the "condition" is satisfied, and otherwise 0. We denote by H* the optimal
objective function value of problem (1), denote by X* the set of optimal solutions, and denote by
P_S(x) the Euclidean projection of x onto S for any convex set S. We also denote for short
f(y) = E_v[f_v(y)] and g(x) = E_w[g_w(x)].
2 Algorithm
We focus on the black-box sampling environment. Suppose that we have access to a stochastic
first-order oracle, which returns random realizations of first-order information upon queries. This
is a typical simulation oracle that is available in both online and batch learning. More specifically,
assume that we are given a Sampling Oracle (SO) such that
• Given some x ∈ ℝⁿ, the SO returns a random vector g_w(x) and a noisy subgradient ∇g_w(x).
• Given some y ∈ ℝᵐ, the SO returns a noisy gradient ∇f_v(y).
Now we propose the Accelerated Stochastic Compositional Proximal Gradient (ASC-PG) algorithm,
see Algorithm 1. ASC-PG is a generalization of the SCGD proposed by Wang et al. [2016], in which
a proximal step is used to replace the projection step.
In Algorithm 1, the extrapolation-smoothing scheme (i.e., the (y, z)-step) is critical to the acceleration of convergence. The acceleration is due to the fast running estimation of the unknown quantity g(x_k) := E_w[g_w(x_k)].
Algorithm 1 Accelerated Stochastic Compositional Proximal Gradient (ASC-PG)
Require: x_1 ∈ ℝⁿ, y_0 ∈ ℝᵐ, SO, K, stepsize sequences {α_k}_{k=1}^K and {β_k}_{k=1}^K.
Ensure: {x_k}_{k=1}^K
1: Initialize z_1 = x_1.
2: for k = 1, …, K do
3:   Query the SO and obtain gradient samples ∇f_{v_k}(y_k), ∇g_{w_k}(z_k).
4:   Update the main iterate by
       x_{k+1} = prox_{α_k R(·)} ( x_k − α_k ∇g_{w_k}^⊤(x_k) ∇f_{v_k}(y_k) ).
5:   Update the auxiliary iterates by an extrapolation-smoothing scheme:
       z_{k+1} = (1 − 1/β_k) x_k + (1/β_k) x_{k+1},
       y_{k+1} = (1 − β_k) y_k + β_k g_{w_{k+1}}(z_{k+1}),
     where the sample g_{w_{k+1}}(z_{k+1}) is obtained via querying the SO.
6: end for
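The update rules translate directly into code. Below is a minimal Python sketch of Algorithm 1, assuming the common choice R(x) = reg_lam·‖x‖₁ (whose proximal operator is soft-thresholding); the callables sample_g, sample_grad_g, and sample_grad_f are hypothetical stand-ins for the SO and are not part of the paper.

```python
import numpy as np

def prox_l1(x, t):
    # prox of t * ||.||_1: soft-thresholding (valid when R(x) = reg_lam * ||x||_1)
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def asc_pg(x1, y0, sample_g, sample_grad_g, sample_grad_f, K,
           a=5.0 / 9.0, b=4.0 / 9.0, reg_lam=0.0):
    # sample_g(z): noisy inner value g_w(z) in R^m
    # sample_grad_g(z): noisy m-by-n Jacobian of g_w at z
    # sample_grad_f(y): noisy gradient of f_v at y in R^m
    x, y = x1.copy(), y0.copy()
    for k in range(1, K + 1):
        alpha, beta = k ** (-a), 2.0 * k ** (-b)        # Theorem 1 schedules
        grad = sample_grad_g(x).T @ sample_grad_f(y)    # chain-rule estimate
        x_new = prox_l1(x - alpha * grad, alpha * reg_lam)
        z = (1.0 - 1.0 / beta) * x + (1.0 / beta) * x_new  # extrapolation (z-step)
        y = (1.0 - beta) * y + beta * sample_g(z)          # smoothing (y-step)
        x = x_new
    return x
```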
At iteration k, the running estimate y_k of g(x_k) is obtained using a weighted smoothing scheme, corresponding to the y-step, while the new query point z_{k+1} is obtained through extrapolation, corresponding to the z-step. The updates are constructed in such a way that y_k is a nearly unbiased estimate of g(x_k). To see how the extrapolation-smoothing scheme works, we let the weights be
\[
\xi_t^{(k)} = \begin{cases} \beta_t \prod_{i=t+1}^{k} (1-\beta_i), & \text{if } k > t, \\ \beta_k, & \text{if } k = t. \end{cases} \tag{3}
\]
Then we can verify the following important relations:
\[
x_{k+1} = \sum_{t=0}^{k} \xi_t^{(k)} z_{t+1}, \qquad y_{k+1} = \sum_{t=0}^{k} \xi_t^{(k)} g_{w_{t+1}}(z_{t+1}),
\]
which essentially say that x_{k+1} is a damped weighted average of {z_{t+1}}_{t=0}^{k} and y_{k+1} is a damped weighted average of {g_{w_{t+1}}(z_{t+1})}_{t=0}^{k}.
An Analytical Example of the Extrapolation-Smoothing Scheme. Now consider the special case where g_w(·) is always a linear mapping, g_w(z) = A_w z + b_w, and β_k = 1/(k+1). We can verify that ξ_t^{(k)} = 1/(k+1) for all t. Then we have
\[
g(x_{k+1}) = \frac{1}{k+1} \sum_{t=0}^{k} E[A_w]\, z_{t+1} + E[b_w], \qquad
y_{k+1} = \frac{1}{k+1} \sum_{t=0}^{k} A_{w_{t+1}} z_{t+1} + \frac{1}{k+1} \sum_{t=0}^{k} b_{w_{t+1}}.
\]
In this way, we can see that the scaled error
\[
(k+1)\big(y_{k+1} - g(x_{k+1})\big) = \sum_{t=0}^{k} \big(A_{w_{t+1}} - E[A_w]\big) z_{t+1} + \sum_{t=0}^{k} \big(b_{w_{t+1}} - E[b_w]\big)
\]
is a zero-mean and zero-drift martingale. Under additional technical assumptions, we have
\[
E\big[\|y_{k+1} - g(x_{k+1})\|^2\big] \le O(1/k).
\]
Note that the zero-drift property of the error martingale is the key to the fast convergence rate. The
zero-drift property comes from the near-unbiasedness of yk , which is due to the special construction
of the extrapolation-smoothing scheme. In the more general case where gw is not necessarily linear,
we can use a similar argument to show that yk is a nearly unbiased estimate of g(xk ). As a result, the
extrapolation-smoothing (y, z)-step ensures that yk tracks the unknown quantity g(xk ) efficiently.
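The 1/k decay of the tracking error is easy to observe numerically. The following Python simulation (an illustration of the scheme, not code from the paper) runs the (y, z)-step with β_k = 1/(k+1) against a noisy linear inner map and monitors ‖y_{k+1} − g(x_{k+1})‖²; the random-walk iterate x_k is an arbitrary stand-in for the main sequence.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, K = 5, 3, 10000
A_bar, b_bar = rng.normal(size=(m, n)), rng.normal(size=m)

x, y = np.zeros(n), np.zeros(m)
errs = []
for k in range(1, K + 1):
    beta = 1.0 / (k + 1)
    x_new = x + 0.01 * rng.normal(size=n) / k       # stand-in main iterate
    z = (1 - 1 / beta) * x + (1 / beta) * x_new     # extrapolated query point
    A_w = A_bar + 0.1 * rng.normal(size=(m, n))     # noisy sample of A_w
    b_w = b_bar + 0.1 * rng.normal(size=m)          # noisy sample of b_w
    y = (1 - beta) * y + beta * (A_w @ z + b_w)     # smoothing step
    x = x_new
    errs.append(np.sum((y - (A_bar @ x + b_bar)) ** 2))

print(errs[99], errs[-1])  # error at k=100 vs k=10000: roughly a 100x drop
```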
3 Main Results
We present our main theoretical results in this section. Let us begin by stating our assumptions. Note
that all assumptions involving random realizations of v, w hold with probability 1.
Assumption 1. The samples generated by the SO are unbiased in the following sense:
1. E_{w_k, v_k}( ∇g_{w_k}^⊤(x) ∇f_{v_k}(y) ) = ∇g^⊤(x) ∇f(y)  for all k = 1, 2, …, K and all x, y.
2. E_{w_k}( g_{w_k}(x) ) = g(x)  for all x.
Note that w_k and v_k are not necessarily independent.
Assumption 2. The sample gradients and values generated by the SO satisfy
\[
E_w\big(\|g_w(x) - g(x)\|^2\big) \le \sigma^2 \quad \text{for all } x.
\]
Assumption 3. The sample gradients generated by the SO are uniformly bounded, and the penalty function R has bounded subgradients:
\[
\|\nabla f_v(x)\| \le O(1), \qquad \|\nabla g_w(x)\| \le O(1), \qquad \|\partial R(x)\| \le O(1) \quad \text{for all } x, w, v.
\]
Assumption 4. There exist L_F, L_f, L_g > 0 such that
1. F(z) − F(x) ≤ ⟨∇F(x), z − x⟩ + (L_F/2)‖z − x‖²  for all x, z.
2. ‖∇f_v(y) − ∇f_v(w)‖ ≤ L_f ‖y − w‖  for all y, w, v.
3. ‖g(x) − g(z) − ∇g(z)^⊤(x − z)‖ ≤ (L_g/2)‖x − z‖²  for all x, z.
Our first main result concerns general optimization problems that are not necessarily convex.
Theorem 1 (Smooth (Nonconvex) Optimization). Let Assumptions 1, 2, 3, and 4 hold. Denote F(x) := (E_v(f_v) ∘ E_w(g_w))(x) for short, and suppose that R(x) ≡ 0 in (1) and that E(F(x_k)) is bounded from above. Choose α_k = k^{−a} and β_k = 2k^{−b}, where a ∈ (0, 1) and b ∈ (0, 1), in Algorithm 1. Then we have
\[
\frac{1}{K}\sum_{k=1}^{K} E\big(\|\nabla F(x_k)\|^2\big) \le O\!\Big(K^{a-1} + L_f^2 L_g\, K^{4b-4a}\, I^{\log K}_{4a-4b=1} + L_f^2 K^{-b} + K^{-a}\Big). \tag{4}
\]
If L_g ≠ 0 and L_f ≠ 0, choose a = 5/9 and b = 4/9, yielding
\[
\frac{1}{K}\sum_{k=1}^{K} E\big(\|\nabla F(x_k)\|^2\big) \le O(K^{-4/9}). \tag{5}
\]
If L_g = 0 or L_f = 0, choose a = b = 1/2, yielding
\[
\frac{1}{K}\sum_{k=1}^{K} E\big(\|\nabla F(x_k)\|^2\big) \le O(K^{-1/2}). \tag{6}
\]
The result of Theorem 1 strictly improves the best-known results given by Wang et al. [2016]. First, the result in (5) improves the finite-sample error bound from O(k^{-2/7}) to O(k^{-4/9}) for general convex and nonconvex optimization. This improves the best known convergence rate and provides a new benchmark for the stochastic composition problem. Note that it is possible to relax the condition "E(F(x_k)) is bounded from above" in Theorem 1; however, doing so would make the analysis more cumbersome and yield an additional log K term in the error bound.
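Since the prescribed exponents depend only on which Lipschitz constants vanish (and, as stated in Theorem 2 below, on strong convexity), they are easy to tabulate. The helper below is a convenience sketch summarizing the choices in Theorems 1 and 2, not code from the paper.

```python
def asc_pg_exponents(Lf_zero=False, Lg_zero=False, strongly_convex=False):
    # Returns (a, b) for alpha_k ~ k^{-a}, beta_k ~ k^{-b}
    if strongly_convex:                       # Theorem 2
        return (1.0, 1.0) if (Lf_zero or Lg_zero) else (1.0, 4.0 / 5.0)
    return (0.5, 0.5) if (Lf_zero or Lg_zero) else (5.0 / 9.0, 4.0 / 9.0)  # Theorem 1
```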
Our second main result concerns strongly convex objective functions. We say that the objective function H is optimally strongly convex with parameter λ > 0 if
\[
H(x) - H(P_{X^*}(x)) \ge \lambda\, \|x - P_{X^*}(x)\|^2 \quad \text{for all } x \tag{7}
\]
(see Liu and Wright [2015]). Note that any strongly convex function is optimally strongly convex, but the reverse does not hold. For example, the objective function (2) in on-policy reinforcement learning is always optimally strongly convex (even if E(A) is a rank-deficient matrix), but not necessarily strongly convex.
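To see the first claim, suppose H is λ-strongly convex with unconstrained minimizer x* = P_{X*}(x); then 0 ∈ ∂H(x*), and strong convexity gives
\[
H(x) - H(P_{X^*}(x)) \ge \langle 0,\, x - P_{X^*}(x) \rangle + \frac{\lambda}{2}\|x - P_{X^*}(x)\|^2 = \frac{\lambda}{2}\|x - P_{X^*}(x)\|^2,
\]
so H satisfies (7) with parameter λ/2.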
Theorem 2 (Strongly Convex Optimization). Suppose that the objective function H(x) in (1) is optimally strongly convex with parameter λ > 0 as defined in (7). Set α_k = C_a k^{−a} and β_k = C_b k^{−b}, where C_a > 4/λ, C_b > 2, a ∈ (0, 1], and b ∈ (0, 1], in Algorithm 1. Under Assumptions 1, 2, 3, and 4, we have
\[
E\big(\|x_K - P_{X^*}(x_K)\|^2\big) \le O\!\big(K^{-a} + L_f^2 L_g\, K^{-4a+4b} + L_f^2 K^{-b}\big). \tag{8}
\]
If L_g ≠ 0 and L_f ≠ 0, choose a = 1 and b = 4/5, yielding
\[
E\big(\|x_K - P_{X^*}(x_K)\|^2\big) \le O(K^{-4/5}). \tag{9}
\]
If L_g = 0 or L_f = 0, choose a = 1 and b = 1, yielding
\[
E\big(\|x_K - P_{X^*}(x_K)\|^2\big) \le O(K^{-1}). \tag{10}
\]
Let us discuss the results of Theorem 2. In the general case where L_f ≠ 0 and L_g ≠ 0, the convergence rate in (9) is consistent with the result of Wang et al. [2016]. Now consider the special case where L_g = 0, i.e., the inner mapping is linear. This result finds an immediate application to the Bellman error minimization problem (2) arising from reinforcement learning (with ℓ₁-norm regularization). The proposed ASC-PG algorithm is able to achieve the optimal rate O(1/K) without any extra assumption on the outer function f_v. To the best of our knowledge, this is the best (also optimal) sample-error complexity for on-policy reinforcement learning.
Remarks. Theorems 1 and 2 give important implications for the special cases where L_f = 0 or L_g = 0. In these cases, we argue that our convergence rate (10) is "optimal" with respect to the sample size K. To see this, it is worth pointing out that the O(1/K) rate of convergence is optimal for the strongly convex expectation minimization problem. Because the expectation minimization problem is a special case of problem (1), the O(1/K) convergence rate must be optimal for the stochastic composition problem too.
• Consider the case where L_f = 0, which means that the outer function f_v(·) is linear with probability 1. Then the stochastic composition problem (1) reduces to an expectation minimization problem, since (E_v f_v ∘ E_w g_w)(x) = E_v(f_v(E_w g_w(x))) = E_v E_w (f_v ∘ g_w)(x). Therefore, it makes perfect sense to obtain the optimal convergence rate.
• Consider the case where L_g = 0, which means that the inner function g(·) is a linear mapping. This result is quite surprising: even when g(·) is a linear mapping, problem (1) does not reduce to an expectation minimization problem. Nevertheless, ASC-PG still achieves the optimal convergence rate. This suggests that, when inner linearity holds, the stochastic composition problem (1) is not fundamentally more difficult than the expectation minimization problem.
The convergence rate results unveiled in Theorems 1 and 2 are the best known results for the composition problem. We believe that they provide important new results and insights into the complexity of the stochastic composition problem.
4 Application to Reinforcement Learning
In this section, we apply the proposed ASC-PG algorithm to conduct policy-value evaluation in reinforcement learning by attacking the Bellman equations. Suppose that there are in total S states. Let the policy of interest be π. Denote the value function of the states by V^π ∈ ℝ^S, where V^π(s) denotes the value of being at state s under policy π. The Bellman equation of the problem is
\[
V^{\pi}(s_1) = E_{\pi}\{ r_{s_1, s_2} + \gamma\, V^{\pi}(s_2) \mid s_1 \} \quad \text{for all } s_1, s_2 \in \{1, \ldots, S\},
\]
where r_{s_1,s_2} denotes the reward of moving from state s_1 to s_2, and the expectation is taken over all possible future states s_2 conditioned on the current state s_1 and the policy π. The solution V* ∈ ℝ^S to the above equation satisfies V* = V^π. A moderately large S makes solving the Bellman equation directly impractical. To resolve this curse of dimensionality, in many practical applications we approximate the value of each state by a linear map of its feature vector φ_s ∈ ℝ^d, where d < S, to reduce the dimension. In particular, we assume that V^π(s) ≈ φ_s^⊤ w* for some w* ∈ ℝ^d.
To compute w*, we formulate the problem as a Bellman residual minimization problem:
\[
\min_{w} \; \sum_{s=1}^{S} \big( \varphi_s^{\top} w - q_{\pi,s}(w) \big)^2,
\]
Figure 1: Empirical convergence rate of the ASC-PG algorithm and the GTD2-MP algorithm under Experiment 1, averaged over 100 runs, where w_k denotes the solution at the k-th iteration.
where
\[
q_{\pi,s}(w) = E_{\pi}\{ r_{s,s'} + \gamma\, \varphi_{s'}^{\top} w \} = \sum_{s'} P^{\pi}_{ss'} \big( r_{s,s'} + \gamma\, \varphi_{s'}^{\top} w \big);
\]
γ < 1 is a discount factor, and r_{s,s'} is the random reward of the transition from state s to state s'. It is clear that the proposed ASC-PG algorithm can be directly applied to solve this problem, where we take
\[
g(w) = \big( \varphi_1^{\top} w,\; q_{\pi,1}(w),\; \ldots,\; \varphi_S^{\top} w,\; q_{\pi,S}(w) \big) \in \mathbb{R}^{2S},
\]
\[
f\big( \varphi_1^{\top} w,\; q_{\pi,1}(w),\; \ldots,\; \varphi_S^{\top} w,\; q_{\pi,S}(w) \big) = \sum_{s=1}^{S} \big( \varphi_s^{\top} w - q_{\pi,s}(w) \big)^2 \in \mathbb{R}.
\]
We point out that the g(·) function here is a linear map. By our theoretical analysis, we expect to achieve a faster O(1/k) rate, which is justified empirically in our later simulation study.
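To make the reduction concrete, the following Python sketch assembles the sampling oracle for this problem; the helper name make_bellman_composition and its arguments are illustrative and not from the paper. Because sample_g is affine in w, we have L_g = 0 here, which is exactly the regime where Theorem 2 yields the O(1/K) rate.

```python
import numpy as np

def make_bellman_composition(Phi, P, R, gamma):
    # Phi: (S, d) features; P: (S, S) transition matrix P^pi;
    # R: (S, S) rewards r_{s,s'}; gamma: discount factor in (0, 1)
    S, d = Phi.shape

    def sample_g(w, rng):
        # One-sample estimate of g(w) = (phi_s^T w, q_{pi,s}(w))_{s=1..S} in R^{2S}:
        # draw s' ~ P(s, .) for every s and plug in its reward and feature.
        nxt = np.array([rng.choice(S, p=P[s]) for s in range(S)])
        q = R[np.arange(S), nxt] + gamma * (Phi[nxt] @ w)
        return np.concatenate([Phi @ w, q])

    def f(u):
        # Deterministic outer function: sum of squared Bellman residuals.
        return np.sum((u[:S] - u[S:]) ** 2)

    return sample_g, f
```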
We consider three experiments. In the first two, we compare our proposed accelerated ASC-PG algorithm with the SCGD algorithm [Wang et al., 2016] and the recently proposed GTD2-MP algorithm [Liu et al., 2015], and we do not add any regularization term, i.e., R(·) = 0. In the third experiment, we add an ℓ₁-penalization term λ‖w‖₁. In all cases, we choose the step sizes via comparison studies, as in Dann et al. [2014]:
• Experiment 1: We use Baird's example [Baird et al., 1995], which is a well-known example for testing off-policy convergent algorithms. This example contains S = 6 states and two actions at each state. We refer the reader to Baird et al. [1995] for more detailed information about the example.
• Experiment 2: We generate a Markov decision problem (MDP) using a setup similar to that in White and White [2016]. In each instance, we randomly generate an MDP containing S = 100 states and three actions at each state. Given one state and one action, the agent can move to one of four next possible states. In our simulation, we generate the transition probabilities for each MDP instance uniformly from [0, 1] and normalize the sum of transitions to one, and we generate the reward for each transition also uniformly in [0, 1].
• Experiment 3: We generate the data in the same way as in Experiment 2, except that we use a larger, d = 100 dimensional feature space in which only the first four components of w* are nonzero. We add an ℓ₁-regularization term, λ‖w‖₁, to the objective function.
Denote by w_k the solution at the k-th iteration. For the first two experiments, we report the empirical convergence performance ‖w_k − w*‖ and ‖Φw_k − Φw*‖, where Φ = (φ_1, …, φ_S)^⊤ ∈ ℝ^{S×d} and Φw* = V^π, and all w_k's are averaged over 100 runs, in the first two subfigures of Figures 1 and 2. It is seen that the ASC-PG algorithm achieves the fastest convergence rate empirically in both experiments. To further evaluate our theoretical results, we plot log(k) vs. log(‖w_k − w*‖) (or log(‖Φw_k − Φw*‖)), averaged over 100 runs, for the first two experiments in the second two subfigures of Figures 1 and 2.
Figure 2: Empirical convergence rate of the ASC-PG algorithm and the GTD2-MP algorithm under Experiment 2, averaged over 100 runs, where w_k denotes the solution at the k-th iteration.
The empirical results further support our theoretical analysis that ‖w_k − w*‖² = O(1/k) for the ASC-PG algorithm when g(·) is a linear mapping.
For Experiment 3, as the optimal solution is unknown, we run the ASC-PG algorithm for one million iterations and take the corresponding solution as the optimal solution ŵ*, and we report ‖w_t − ŵ*‖ and ‖Φw_t − Φŵ*‖, averaged over 100 runs, in Figure 3. It is seen that the ASC-PG algorithm achieves a fast empirical convergence rate.
Figure 3: Empirical convergence rate of the ASC-PG algorithm with the ℓ₁-regularization term λ‖w‖₁ under Experiment 3, averaged over 100 runs, where w_t denotes the solution at the t-th iteration.
5 Conclusion
We develop a proximal-gradient method for the penalized stochastic composition problem. The algorithm updates by interacting with a stochastic first-order oracle. Convergence rates are established under a variety of assumptions, which provide new rate benchmarks. Application of ASC-PG to reinforcement learning leads to a new on-policy learning algorithm, which achieves faster convergence than the best known algorithms. For future research, it remains open whether, or under what circumstances, the current O(K^{-4/9}) rate can be further improved. Another direction is to customize and adapt the algorithm and analysis to more specific problems arising from reinforcement learning and risk-averse optimization, in order to fully exploit the potential of the proposed method.
Acknowledgments
This project is in part supported by NSF grants CNS-1548078 and DMS-10009141.
References
L. Baird et al. Residual algorithms: Reinforcement learning with function approximation. In
Proceedings of the twelfth international conference on machine learning, pages 30?37, 1995.
A. Beck and M. Teboulle. A fast iterative shrinkage-thresholding algorithm for linear inverse
problems. SIAM journal on imaging sciences, 2(1):183?202, 2009.
D. P. Bertsekas. Incremental proximal methods for large scale convex optimization. Mathematical
Programming, Ser. B, 129:163?195, 2011.
B. Dai, N. He, Y. Pan, B. Boots, and L. Song. Learning from conditional distributions via dual kernel
embeddings. arXiv preprint arXiv:1607.04579, 2016.
C. Dann, G. Neumann, and J. Peters. Policy evaluation with temporal differences: A survey and
comparison. The Journal of Machine Learning Research, 15(1):809?883, 2014.
D. Dentcheva, S. Penev, and A. Ruszczynski. Statistical estimation of composite risk functionals and
risk optimization problems. arXiv preprint arXiv:1504.02658, 2015.
Y. M. Ermoliev. Methods of Stochastic Programming. Monographs in Optimization and OR, Nauka,
Moscow, 1976.
S. Ghadimi and G. Lan. Accelerated gradient methods for nonconvex nonlinear and stochastic
programming. Mathematical Programming, pages 1?41, 2015.
M. Gurbuzbalaban, A. Ozdaglar, and P. Parrilo. On the convergence rate of incremental aggregated
gradient algorithms. arXiv preprint arXiv:1506.02081, 2015.
B. Liu, J. Liu, M. Ghavamzadeh, S. Mahadevan, and M. Petrik. Finite-sample analysis of proximal
gradient td algorithms. In Proc. The 31st Conf. Uncertainty in Artificial Intelligence, Amsterdam,
Netherlands, 2015.
J. Liu and S. J. Wright. Asynchronous stochastic coordinate descent: Parallelism and convergence
properties. SIAM Journal on Optimization, 25(1):351?376, 2015.
A. Nedi?c. Random algorithms for convex minimization problems. Mathematical Programming, Ser.
B, 129:225?253, 2011.
A. Nedi?c and D. P. Bertsekas. Incremental subgradient methods for nondifferentiable optimization.
SIAM Journal on Optimization, 12:109?138, 2001.
A. Nemirovski, A. Juditsky, G. Lan, and A. Shapiro. Robust stochastic approximation approach to
stochastic programming. SIAM Journal on Optimization, 19:1574?1609, 2009.
A. Rakhlin, O. Shamir, and K. Sridharan. Making gradient descent optimal for strongly convex
stochastic optimization. In Proceedings of the 29th International Conference on Machine Learning,
pages 449?456, 2012.
O. Shamir and T. Zhang. Stochastic gradient descent for non-smooth optimization: Convergence
results and optimal averaging schemes. In Proceedings of The 30th International Conference on
Machine Learning, pages 71?79, 2013.
R. S. Sutton and A. G. Barto. Reinforcement Learning: An Introduction. MIT press, 1998.
M. Wang and D. P. Bertsekas. Stochastic first-order methods with random constraint projection.
SIAM Journal on Optimization, 26(1):681?717, 2016.
M. Wang and J. Liu. A stochastic compositional subgradient method using Markov samples. Proceedings of Winter Simulation Conference, 2016.
M. Wang, Y. Chen, J. Liu, and Y. Gu. Random multi-constraint projection: Stochastic gradient
methods for convex optimization with many constraints. arXiv preprint arXiv:1511.03760, 2015.
M. Wang, X. Fang, and H. Liu. Stochastic compositional gradient descent: Algorithms for minimizing
compositions of expected-value functions. Mathematical Programming Series A, 2016.
A. White and M. White. Investigating practical, linear temporal difference learning. arXiv preprint
arXiv:1602.08771, 2016.
Bayesian optimization under mixed constraints with a
slack-variable augmented Lagrangian
Victor Picheny
MIAT, Universit? de Toulouse, INRA
Castanet-Tolosan, France
victor.picheny@toulouse.inra.fr
Stefan Wild
Argonne National Laboratory
Argonne, IL, USA
wild@mcs.anl.gov
Robert B. Gramacy
Virginia Tech
Blacksburg, VA, USA
rbg@vt.edu
S?bastien Le Digabel
?cole Polytechnique de Montr?al
Montr?al, QC, Canada
sebastien.le-digabel@polymtl.ca
Abstract
An augmented Lagrangian (AL) can convert a constrained optimization problem
into a sequence of simpler (e.g., unconstrained) problems, which are then usually
solved with local solvers. Recently, surrogate-based Bayesian optimization (BO)
sub-solvers have been successfully deployed in the AL framework for a more global
search in the presence of inequality constraints; however, a drawback was that
expected improvement (EI) evaluations relied on Monte Carlo. Here we introduce
an alternative slack variable AL, and show that in this formulation the EI may be
evaluated with library routines. The slack variables furthermore facilitate equality
as well as inequality constraints, and mixtures thereof. We show our new slack
"ALBO" compares favorably to the original. Its superiority over conventional
alternatives is reinforced on several mixed constraint examples.
1 Introduction
Bayesian optimization (BO), as applied to so-called blackbox objectives, is a modernization of 1970-80s statistical response surface methodology for sequential design [3, 14]. In BO, nonparametric (Gaussian) processes (GPs) provide flexible response surface fits. Sequential design decisions, so-called acquisitions, judiciously balance exploration and exploitation in search for global optima. For reviews, see [5, 4]; until recently this literature has focused on unconstrained optimization.
Many interesting problems contain constraints, typically specified as equalities or inequalities:
\[
\min_{x} \; \{ f(x) : g(x) \le 0,\; h(x) = 0,\; x \in \mathcal{B} \}, \tag{1}
\]
where B ⊂ ℝ^d is usually a bounded hyperrectangle, f : ℝ^d → ℝ is a scalar-valued objective function, and g : ℝ^d → ℝ^m and h : ℝ^d → ℝ^p are vector-valued constraint functions taken componentwise (i.e., g_j(x) ≤ 0, j = 1, …, m; h_k(x) = 0, k = 1, …, p). The typical setup treats f, g, and h as a "joint" blackbox, meaning that providing x to a single computer code reveals f(x), g(x), and h(x) simultaneously, often at great computational expense. A common special case treats f(x) as known (e.g., linear); however, the problem is still hard when g(x) ≤ 0 defines a nonconvex valid region.
Not many algorithms target global solutions to this general, constrained blackbox optimization
problem. Statistical methods are acutely few. We know of no methods from the BO literature natively
accommodating equality constraints, let alone mixed (equality and inequality) ones. Schonlau et al.
30th Conference on Neural Information Processing Systems (NIPS 2016), Barcelona, Spain.
[21] describe how their expected improvement (EI) heuristic can be extended to multiple inequality
constraints by multiplying by an estimated probability of constraint satisfaction. Here, we call this
expected feasible improvement (EFI). EFI has recently been revisited by several authors [23, 7, 6].
However, the technique has pathological behavior in otherwise idealized setups [9], which is related
to a so-called "decoupled" pathology [7]. Some recent information-theoretic alternatives have shown
promise in the inequality constrained setting [10, 17].
We remark that any problem with equality constraints can be "transformed" into one with inequality constraints only, by applying h(x) ≤ 0 and h(x) ≥ 0 simultaneously. However, the effect of such a reformulation is rather uncertain: it puts double weight on the equalities and violates certain regularity (i.e., constraint qualification [15]) conditions. Numerical issues have been reported in empirical work [1, 20].
In this paper we show how a recent BO method for inequality constraints [9] is naturally enhanced to
handle equality constraints, and therefore mixed ones too. The method involves converting inequality
constrained problems into a sequence of simpler subproblems via the augmented Lagrangian (AL, [2]).
AL-based solvers can, under certain regularity conditions, be shown to converge to locally optimal
solutions that satisfy the constraints, so long as the sub-solver converges to local solutions. By
deploying modern BO on the subproblems, as opposed to the usual local solvers, the resulting
meta-optimizer is able to find better, less local solutions with fewer evaluations of the expensive
blackbox, compared to several classical and statistical alternatives. Here we dub that method ALBO.
To extend ALBO to equality constraints, we suggest the opposite transformation to the one described
above: we convert inequality constraints into equalities by introducing slack variables. In the context
of earlier work with the AL, via conventional solvers, this is rather textbook [15, Ch. 17]. Handling
the inequalities in this way leads naturally to solutions for mixed constraints and, more importantly,
dramatically improves the original inequality-only version. In the original (non-slack) ALBO setup,
the density and distribution of an important composite random predictive quantity is not known
in closed form. Except in a few particular cases [18], calculating EI and related quantities under
the AL required Monte Carlo integration, which means that acquisition function evaluations are
computationally expensive, noisy, or both. A reformulated slack-AL version emits a composite that
has a known distribution, a so-called weighted sum of non-central chi-squares (WSNC). We show
that, in that setting, EI calculations involve a simple 1-d integral via ordinary quadrature. Adding
slack variables increases the input dimension of the optimization subproblems, but only artificially so.
The effects of expansion can be mitigated through optimal default settings, which we provide.
The remainder of the paper is organized as follows. Section 2 outlines the components germane to the
ALBO approach: AL, Bayesian surrogate modeling, and acquisition via EI. Section 3 contains the
bulk of our methodological contribution: a slack variable AL, a closed form EI, optimal default slack
settings, and open-source software. Implementation details are provided by our online supplementary
material. Section 4 provides empirical comparisons, and Section 5 concludes.
2 A review of relevant concepts: EI and AL
EI: The canonical acquisition function in BO is expected improvement (EI) [12]. Consider a surrogate f^n(x), trained on n pairs (x_i, y_i = f(x_i)), emitting Gaussian predictive equations with mean μ^n(x) and standard deviation σ^n(x). Define f^n_min = min_{i=1,…,n} y_i, the smallest y-value seen so far, and let I(x) = max{0, f^n_min − Y(x)} be the improvement at x. I(x) is largest when Y(x) ∼ f^n(x) has substantial distribution below f^n_min. The expectation of I(x) over Y(x) has a convenient closed form, revealing a balance between exploitation (μ^n(x) under f^n_min) and exploration (large σ^n(x)):
\[
E\{I(x)\} = \big(f^n_{\min} - \mu^n(x)\big)\, \Phi\!\left( \frac{f^n_{\min} - \mu^n(x)}{\sigma^n(x)} \right) + \sigma^n(x)\, \phi\!\left( \frac{f^n_{\min} - \mu^n(x)}{\sigma^n(x)} \right), \tag{2}
\]
where Φ (φ) is the standard normal cdf (pdf). Accurate, approximately Gaussian predictive equations are provided by many statistical models (e.g., GPs). In non-Gaussian contexts, Monte Carlo schemes (sampling Y(x)'s and averaging I(x)'s) offer a computationally intensive alternative.
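As a concrete illustration, equation (2) is a one-liner given Gaussian predictive equations. The Python sketch below (an illustration, not code from any package discussed later) evaluates it, guarding against numerically zero predictive variance.

```python
import numpy as np
from scipy.stats import norm

def expected_improvement(mu, sigma, f_min):
    # Closed-form EI (2) under Gaussian predictive equations
    sigma = np.maximum(sigma, 1e-12)  # guard against zero predictive variance
    u = (f_min - mu) / sigma
    return (f_min - mu) * norm.cdf(u) + sigma * norm.pdf(u)
```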
AL: Although several authors have suggested extensions to EI for constraints, the BO literature has
primarily focused on unconstrained problems. The range of constrained BO options was recently
extended by borrowing an apparatus from the mathematical optimization literature, the augmented
Lagrangian, allowing unconstrained methods to be adapted to constrained problems. The AL, as a
device for solving problems with inequality constraints (no h(x) in Eq. (1)), may be defined as
\[
L_A(x; \lambda, \rho) = f(x) + \lambda^{\top} g(x) + \frac{1}{2\rho} \sum_{j=1}^{m} \max\{0, g_j(x)\}^2, \tag{3}
\]
where ρ > 0 is a penalty parameter on constraint violation and λ ∈ ℝ^m_+ serves as a Lagrange multiplier. AL methods are iterative, involving a particular sequence of (x; λ, ρ). Given the current values λ^{k-1} and ρ^{k-1}, one approximately solves the subproblem
\[
\min_{x} \; \big\{ L_A(x; \lambda^{k-1}, \rho^{k-1}) : x \in \mathcal{B} \big\}, \tag{4}
\]
via a conventional (bound-constrained) solver. The parameters (λ, ρ) are updated depending on the
nature of the solution found, and the process repeats. The particulars in our setup are provided in
Alg. 1; for more details see [15, Ch. 17]. Local convergence is guaranteed under relatively mild
conditions involving the choice of subroutine solving (4). Loosely, all that is required is that the solver
"makes progress" on the subproblem. In contexts where termination depends more upon computational
budget than on a measure of convergence, as in many BO problems, that added flexibility is welcome.
However, the AL does not typically enjoy global scope. The local minima found by the method are sensitive to initialization: starting choices for (λ^0, ρ^0) or x^0; local searches in iteration k are usually started from x^{k-1}. However, this dependence is broken when statistical surrogates drive the search for solutions to the subproblems.

Require: λ^0 ≥ 0, ρ^0 > 0
1: for k = 1, 2, … do
2:   Let x^k (approximately) solve (4)
3:   Set λ^k_j = max{0, λ^{k-1}_j + (1/ρ^{k-1}) g_j(x^k)}, j = 1, …, m
4:   If g(x^k) ≤ 0, set ρ^k = ρ^{k-1}; else, set ρ^k = ½ρ^{k-1}
5: end for
Algorithm 1: Basic augmented Lagrangian method

Independently fit GP surrogates, f^n(x) for the objective and g^n(x) = (g^n_1(x), …, g^n_m(x)) for the constraints, yield predictive distributions for Y^n_f(x) and Y^n_g(x) = (Y^n_{g_1}(x), …, Y^n_{g_m}(x)). Dropping the n superscripts, the AL composite random variable
\[
Y(x) = Y_f(x) + \lambda^{\top} Y_g(x) + \frac{1}{2\rho} \sum_{j=1}^{m} \max\{0, Y_{g_j}(x)\}^2
\]
can serve as a surrogate for (3); however, it is difficult to deduce its distribution from the components of Y_f and Y_g, even when those are independently Gaussian. While its mean is available in closed form, EI requires Monte Carlo.
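To see why, consider the following Monte Carlo sketch of EI under the AL composite, assuming independent Gaussian surrogates for the objective and each constraint; the function name and signature are illustrative only. Every acquisition evaluation requires fresh draws, so the resulting EI surface is both expensive and noisy.

```python
import numpy as np

def al_composite_ei_mc(mu_f, sd_f, mu_g, sd_g, lam, rho, y_min,
                       n_mc=1000, rng=None):
    # mu_f, sd_f: scalar predictive moments of Y_f(x)
    # mu_g, sd_g: length-m predictive moments of Y_g(x)
    rng = rng or np.random.default_rng()
    Yf = rng.normal(mu_f, sd_f, size=n_mc)
    Yg = rng.normal(mu_g, sd_g, size=(n_mc, len(mu_g)))
    Y = Yf + Yg @ lam + np.sum(np.maximum(Yg, 0.0) ** 2, axis=1) / (2.0 * rho)
    return np.mean(np.maximum(y_min - Y, 0.0))  # MC average of improvements
```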
3 A novel formulation involving slack variables
An equivalent formulation of (1) involves introducing slack variables, s_j, for j = 1, …, m (i.e., one for each inequality constraint g_j(x)), and converting the mixed constraint problem (1) to one with only equality constraints (plus bound constraints for s_j): g_j(x) + s_j = 0, s_j ∈ ℝ_+, for j = 1, …, m. Observe that introducing the slack "inputs" increases the dimension of the problem from d to d + m.
Reducing a mixed constraint problem to one involving only equality and bound constraints is valuable insofar as one has good solvers for those problems. Suppose, for the moment, that the original problem (1) has no equality constraints (i.e., p = 0). In this case, a slack variable-based AL method is readily available as an alternative to the description in Section 2. Although we frame it as an "alternative", some would describe this as the standard version [see, e.g., 15, Ch. 17]. The AL is
\[
L_A(x, s; \lambda_g, \rho) = f(x) + \lambda_g^{\top} (g(x) + s) + \frac{1}{2\rho} \sum_{j=1}^{m} (g_j(x) + s_j)^2. \tag{5}
\]
This formulation is more convenient than (3) because the "max" is missing, but the extra slack variables mean solving a higher, (d + m)-dimensional subproblem compared to (4). That AL can be expanded to handle equality (and thereby mixed) constraints as follows:
\[
L_A(x, s; \lambda_g, \lambda_h, \rho) = f(x) + \lambda_g^{\top}(g(x)+s) + \lambda_h^{\top} h(x) + \frac{1}{2\rho} \left[ \sum_{j=1}^{m} (g_j(x)+s_j)^2 + \sum_{k=1}^{p} h_k(x)^2 \right]. \tag{6}
\]
Defining c(x) := (g(x)^⊤, h(x)^⊤)^⊤ and λ := (λ_g^⊤, λ_h^⊤)^⊤, and enlarging the dimension of s with the understanding that s_{m+1} = ⋯ = s_{m+p} = 0, leads to a streamlined AL for mixed constraints
\[
L_A(x, s; \lambda, \rho) = f(x) + \lambda^{\top} (c(x) + s) + \frac{1}{2\rho} \sum_{j=1}^{m+p} (c_j(x) + s_j)^2, \tag{7}
\]
with λ ∈ ℝ^{m+p}. A non-slack AL formulation (3) can analogously be written as
\[
L_A(x; \lambda_g, \lambda_h, \rho) = f(x) + \lambda_g^{\top} g(x) + \lambda_h^{\top} h(x) + \frac{1}{2\rho} \left[ \sum_{j=1}^{m} \max\{0, g_j(x)\}^2 + \sum_{k=1}^{p} h_k(x)^2 \right],
\]
with λ_g ∈ ℝ^m_+ and λ_h ∈ ℝ^p. Eq. (7), by contrast, is easier to work with because it is a smooth
quadratic in the objective (f ) and constraints (c). In what follows, we show that (7) facilitates
calculation of important quantities like EI, in the GP-based BO framework, via a library routine. So
slack variables not only facilitate mixed constraints in a unified framework, but they also lead to a
more efficient handling of the original inequality (only) constrained problem.
3.1 Distribution of the slack-AL composite
If Y_f and Y_{c_1}, …, Y_{c_{m+p}} represent random predictive variables from the m + p + 1 surrogates fitted to n realized objective and constraint evaluations, then the analogous slack-AL random variable is
\[
Y(x, s) = Y_f(x) + \sum_{j=1}^{m+p} \lambda_j \big(Y_{c_j}(x) + s_j\big) + \frac{1}{2\rho} \sum_{j=1}^{m+p} \big(Y_{c_j}(x) + s_j\big)^2. \tag{8}
\]
As for the original AL, the mean of this RV has a simple closed form in terms of the means and variances of the surrogates. In the Gaussian case, we show that we can obtain a closed form for the full distribution of the slack-AL variate (8). Toward that aim, first rewrite Y as:
\[
Y(x,s) = Y_f(x) + \sum_{j=1}^{m+p} \lambda_j s_j + \frac{1}{2\rho}\sum_{j=1}^{m+p} s_j^2 + \frac{1}{2\rho}\sum_{j=1}^{m+p}\big[ 2\lambda_j \rho\, Y_{c_j}(x) + 2 s_j Y_{c_j}(x) + Y_{c_j}(x)^2 \big]
\]
\[
= Y_f(x) + \sum_{j=1}^{m+p} \lambda_j s_j + \frac{1}{2\rho}\sum_{j=1}^{m+p} s_j^2 + \frac{1}{2\rho}\sum_{j=1}^{m+p}\big[ (\beta_j + Y_{c_j}(x))^2 - \beta_j^2 \big],
\]
with β_j = λ_j ρ + s_j. Now decompose Y(x, s) into a sum of three quantities:
\[
Y(x,s) = Y_f(x) + r(s) + \frac{1}{2\rho} W(x,s), \quad \text{with} \tag{9}
\]
\[
r(s) = \sum_{j=1}^{m+p} \lambda_j s_j + \frac{1}{2\rho} \sum_{j=1}^{m+p} s_j^2 - \frac{1}{2\rho} \sum_{j=1}^{m+p} \beta_j^2
\quad \text{and} \quad
W(x,s) = \sum_{j=1}^{m+p} \big( \beta_j + Y_{c_j}(x) \big)^2.
\]
Using Y_{c_j} ∼ N(μ_{c_j}(x), σ²_{c_j}(x)), i.e., leveraging Gaussianity, W can be written as
\[
W(x,s) = \sum_{j=1}^{m+p} \sigma^2_{c_j}(x)\, X_j(x,s), \quad \text{with } X_j(x,s) \sim \chi^2\!\left( \mathrm{dof} = 1,\; \delta = \left( \frac{\mu_{c_j}(x) + \beta_j}{\sigma_{c_j}(x)} \right)^{\!2} \right). \tag{10}
\]
The line above is the expression of a weighted sum of non-central chi-square (WSNC) variates. Each of the m + p variates involves a unit degrees-of-freedom (dof) parameter and a non-centrality parameter δ. A number of efficient methods exist for evaluating the density, distribution, and quantile functions of WSNC random variables. Details and code are provided in our supplementary materials.
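As one concrete route, W(x, s) can be sampled directly with standard library noncentral chi-square draws; the Python sketch below mirrors (10) but is only an illustration, not the exact routines used in our supplement.

```python
import numpy as np
from scipy.stats import ncx2

def sample_W(mu_c, sd_c, beta, n_draws=10000, rng=None):
    # W(x,s) = sum_j sd_c[j]^2 * X_j, with X_j ~ chi^2(dof=1, delta_j) as in (10)
    rng = rng or np.random.default_rng()
    delta = ((mu_c + beta) / sd_c) ** 2  # non-centrality parameters
    X = ncx2.rvs(df=1, nc=delta, size=(n_draws, len(mu_c)), random_state=rng)
    return X @ (sd_c ** 2)
```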
Some constrained optimization problems involve a known objective f(x). In that case, referring back to (9), we are done: Y(x, s) is WSNC (as in (10)), shifted by the known quantity f(x) + r(s). When Y_f(x) is conditionally Gaussian, W̃(x, s) = Y_f(x) + (1/(2ρ)) W(x, s) is the weighted sum of a Gaussian and WSNC variates, a problem that is again well-studied; see the supplementary material.
3.2 Slack-AL expected improvement
Evaluating EI at candidate (x, s) locations under the AL composite involves working with EI(x, s) = E[(y^n_min − Y(x, s)) I_{Y(x,s) ≤ y^n_min}], given the current minimum y^n_min of the AL over all n runs. When f(x) is known, let w^n_min(x, s) = 2ρ(y^n_min − f(x) − r(s)) absorb all of the non-random quantities involved in the EI calculation. Then, with D_W(·; x, s) denoting the distribution of W(x, s),
\[
\mathrm{EI}(x,s) = \frac{1}{2\rho} E\!\left[ \big( w^n_{\min}(x,s) - W(x,s) \big)\, I_{W(x,s) \le w^n_{\min}(x,s)} \right]
= \frac{1}{2\rho} \int_{-\infty}^{w^n_{\min}(x,s)} D_W(t; x,s)\, dt
= \frac{1}{2\rho} \int_{0}^{w^n_{\min}(x,s)} D_W(t; x,s)\, dt \tag{11}
\]
if w^n_min(x, s) ≥ 0, and zero otherwise. That is, the EI boils down to integrating the distribution function of W(x, s) between 0 (since W is positive) and w^n_min(x, s). This is a one-dimensional definite integral
that is easy to approximate via quadrature; details are in the supplementary material. Since W(x, s) is quadratic in the Y_c(x) values, it is often the case, especially for smaller ρ-values in later AL iterations, that D_W(t; x, s) is zero over most of [0, w^n_min(x, s)], simplifying numerical integration. However, this has deleterious impacts on search over (x, s), as we discuss in our supplement. When f(x) is unknown and Y_f(x) is conditionally normal, let w̃^n_min(s) = 2ρ(y^n_min − r(s)). Then,
\[
\mathrm{EI}(x,s) = \frac{1}{2\rho} E\!\left[ \big( \tilde{w}^n_{\min}(s) - \tilde{W}(x,s) \big)\, I_{\tilde{W}(x,s) \le \tilde{w}^n_{\min}(s)} \right]
= \frac{1}{2\rho} \int_{-\infty}^{\tilde{w}^n_{\min}(s)} D_{\tilde{W}}(t; x,s)\, dt.
\]
Here the lower bound of the definite integral cannot be zero, since Y_f(x) may be negative and thus W̃(x, s) may have non-zero distribution for negative t-values. This can challenge the numerical quadrature, although many library functions allow indefinite bounds. We obtain better performance by supplying a conservative finite lower bound, for example three standard deviations in Y_f(x), in units of the penalty (2ρ), below zero: −6ρσ_f(x). Implementation details are in our supplement.
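A minimal sketch of the known-f case (11) follows, approximating D_W by the empirical cdf of draws of W (e.g., from sample_W above) and applying the trapezoid rule; the actual implementation instead evaluates the exact WSNC distribution via a library routine, so treat this purely as an illustration.

```python
import numpy as np

def slack_al_ei_known_f(w_draws, w_min, rho, n_grid=200):
    # EI (11) for known f: (1 / (2 rho)) * integral_0^{w_min} D_W(t) dt
    if w_min <= 0.0:
        return 0.0
    t = np.linspace(0.0, w_min, n_grid)
    cdf = np.searchsorted(np.sort(w_draws), t, side="right") / len(w_draws)
    return np.trapz(cdf, t) / (2.0 * rho)  # trapezoid-rule quadrature
```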
3.3 AL updates, optimal slack settings, and other implementation notes
The new slack-AL method is completed by describing when the subproblem (7) is deemed to be "solved" (step 2 in Alg. 1) and how λ and ρ are updated (steps 3-4). We terminate the BO search sub-solver after a single iteration, as this matches the spirit of EI-based search, whose choice of next location can be shown to be optimal, in a certain sense, if it is the final point being selected. It also meshes well with an updating scheme analogous to that in steps 3-4: updating only when no actual improvement (in terms of constraint violation) is realized by that choice. That is,
step 2: Let (x^k, s^k) approximately solve min_{x,s} { L_A(x, s; λ^{k-1}, ρ^{k-1}) : (x, s_{1:m}) ∈ B × ℝ^m_+ }
step 3: λ^k_j = λ^{k-1}_j + (1/ρ^{k-1})(c_j(x^k) + s^k_j), for j = 1, …, m + p
step 4: If c_{1:m}(x^k) ≤ 0 and |c_{m+1:m+p}(x^k)| ≤ ε, set ρ^k = ρ^{k-1}; else ρ^k = ½ρ^{k-1}
Above, step 3 is the same as in Alg. 1 except without the "max", and with slacks augmenting the constraint values. The "if" statement in step 4 checks for validity at x^k, deploying a threshold ε > 0 on the equality constraints; further discussion of the threshold is deferred to Section 4, where we discuss progress metrics under mixed constraints. If validity holds at (x^k, s^k), the current AL iteration is deemed to have "made progress" and the penalty remains unchanged; otherwise it is doubled. An alternate formulation may check |c_{1:m}(x^k) + s^k_{1:m}| ≤ ε. We find that the version in step 4, above, is cleaner because it limits sensitivity to the choice of threshold ε. In our supplementary material we recommend initial (λ^0, ρ^0) values which are analogous to the original, non-slack AL settings.
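Steps 3-4 amount to only a few lines of code; the following Python sketch (illustrative, not the laGP or DiceOptim implementation) performs the multiplier and penalty updates.

```python
import numpy as np

def update_multipliers(lam, rho, c_x, s, m, eps=1e-2):
    # c_x: c(x^k) of length m+p (inequalities first); s: slacks (zero past index m)
    lam_new = lam + (c_x + s) / rho                        # step 3 (no max{0, .})
    valid = np.all(c_x[:m] <= 0) and np.all(np.abs(c_x[m:]) <= eps)
    rho_new = rho if valid else rho / 2.0                  # step 4
    return lam_new, rho_new
```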
Optimal choice of slacks: The biggest difference between the original AL (3) and the slack-AL (7) is that the latter requires searching over both x and s, whereas the former involves only x-values. In what follows we show that there are automatic choices for the s-values as a function of the corresponding x's, keeping the search space d-dimensional, rather than (d + m)-dimensional.
For an observed c_j(x) value, associated slack variables minimizing the AL (7) can be obtained analytically. Using the form of (9), observe that min_{s ∈ ℝ^m_+} y(x, s) is equivalent to
\[
\min_{s \in \mathbb{R}^m_+} \; \sum_{j=1}^{m} \big[ 2\lambda_j \rho\, s_j + s_j^2 + 2 s_j c_j(x) \big].
\]
For fixed x, this is strictly convex in s. Therefore, its unconstrained minimum can only be its stationary point, which satisfies 0 = 2λ_j ρ + 2 s*_j(x) + 2 c_j(x), for j = 1, …, m. Accounting for the nonnegativity constraint, we obtain the following optimal slack as a function of x:
\[
s^*_j(x) = \max\{0,\; -\lambda_j \rho - c_j(x)\}, \qquad j = 1, \ldots, m. \tag{12}
\]
Above we write s* as a function of x to convey that x remains a "free" quantity in y(x, s*(x)). Recall that slacks on the equality constraints are zero: s_k(x) = 0, k = m+1, …, m+p, for all x.
In the blackbox c(x) setting, y(x, s*(x)) is only directly accessible at the data locations x_i. At other x-values, however, the surrogates provide a useful approximation. When Y_c(x) is (approximately) Gaussian, it is straightforward to show that the optimal setting of the slack variables, solving min_{s ∈ ℝ^m_+} E[Y(x, s)], is s*_j(x) = max{0, −λ_j ρ − μ_{c_j}(x)}, i.e., the same as (12) with a prediction μ_{c_j}(x) substituted for Y_{c_j}(x), the unknown c_j(x) value. Again, slacks on the equality constraints are set to zero.
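The plug-in rule is equally compact; a sketch:

```python
import numpy as np

def optimal_slacks(mu_c, lam, rho, m):
    # Eq. (12) with surrogate means mu_c plugged in for the unknown c_j(x)
    s_ineq = np.maximum(0.0, -lam[:m] * rho - mu_c[:m])
    return np.concatenate([s_ineq, np.zeros(len(lam) - m)])  # equality slacks = 0
```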
Other criteria can be used to choose the slack variables. Instead of minimizing the mean of the composite, one could maximize the EI. In our supplementary material we explain how this is of dubious practical value, being more computationally intensive while providing nearly identical results in practice.
Implementation notes: Code supporting all methods in this manuscript is provided in two open-source R packages: laGP [8] and DiceOptim [19], both on CRAN [22]. Implementation details vary somewhat across those packages, due primarily to particulars of their surrogate modeling capability and how they search the EI surface. For example, laGP can accommodate a smaller initial design size because it learns fewer parameters (i.e., has fewer degrees of freedom). DiceOptim uses a multi-start search procedure for EI, whereas laGP deploys a random candidate grid, which may optionally be "finished" with an L-BFGS-B search. Nevertheless, their qualitative behavior exhibits strong similarity. Both packages also implement the original AL scheme (i.e., without slack variables), updated as in (6) for mixed constraints. Further details are provided in our supplementary material.
4 Empirical comparison
Here we describe three test problems, each mixing challenging elements from traditional unconstrained blackbox optimization benchmarks, but in a constrained optimization format. We run our optimizers on these problems 100 times under random initializations. In the case of our GP surrogate comparators, this initialization involves choosing random space-filling designs. Our primary means of comparison is an averaged (over the 100 runs) measure of progress defined by the best valid value of the objective for increasing budgets (number of evaluations of the blackbox), n.
In the presence of equality constraints it is necessary to relax this definition somewhat, as the valid set may be of measure zero. In such cases we choose a tolerance ε ≥ 0 and declare a solution to be "valid" when the inequality constraints are all satisfied and |h_k(x)| < ε for all k = 1, …, p. In our figures we choose ε = 10⁻²; however, the results are similar under stronger thresholds, with a higher variability over initializations. As finding a valid solution is, in itself, sometimes a difficult task, we additionally report the proportion of runs that find valid and optimal solutions as a function of budget, n, for problems with equality (and mixed) constraints.
4.1 An inequality constrained problem
We first revisit the "toy" problem from [9], having a 2d input space limited to the unit cube, a (known) linear objective, and sinusoidal and quadratic inequality constraints (henceforth the LSQ problem; see the supplementary material for details). Figure 1 shows progress over repeated solves with a maximum budget of 40 blackbox evaluations. The left-hand plot in Figure 1 tracks the average best valid value of the objective found over the iterations, using the progress metric described above. Random initial designs of size n = 5 were used, as indicated by the vertical dashed gray line. The solid gray lines are extracted from a similar plot from [9], containing both AL-based comparators and several from the derivative-free optimization and BO literatures. The details are omitted here. Our new ALBO comparators are shown in thicker colored lines; the solid black line is the original AL(BO)-EI comparator, under a revised (compared to [9]) initialization and updating scheme. The two red lines are variations on the slack-AL algorithm under EI: with (dashed) and without (solid) L-BFGS-B optimization of the EI acquisition at each iteration. Finally, the blue line is PESC [10], using the Python library available at https://github.com/HIPS/Spearmint/tree/PESC. The take-home message from the plot is that all four new methods outperform those considered by the original ALBO paper [9]. Focusing on the new comparators only, observe that their progress is nearly statistically equivalent during the first 20 iterations. However, in the latter iterations stark distinctions emerge, with Slack-AL+optim and PESC, both leveraging L-BFGS-B subroutines, outperforming. This
Figure 1: Results on the LSQ problem with initial designs of size n = 10. The left panel shows
the best valid value of the objective over the first 40 evaluations, whereas the right shows the log
utility-gap for the second 20 evaluations. The solid gray lines show comparators from [9].
discrepancy is more easily visualized in the right panel with a so-called log "utility-gap" plot [10], tracking the log difference between the theoretical best valid value and those found by search.
4.2 Mixed inequality and equality constrained problems
Next consider a problem in four input dimensions with a (known) linear objective and two constraints. The first inequality constraint is the so-called "Ackley" function in d = 4 input dimensions. The second is an equality constraint following the so-called "Hartman 4-dimensional function". Our supplementary material provides a full mathematical specification. Figure 2 shows two views into
Figure 2: Results on the Linear-Ackley-Hartman mixed constraint problem. The left panel shows a progress comparison based on laGP code with initial designs of size n = 10. The x-scale has been divided by 140 for the nlopt comparator. A value of four indicates that no valid solution has been found. The right panel shows the proportion of valid (thin lines) and optimal (thick lines) solutions for the EFI and "Slack AL + optim" comparators.
progress on this problem. Since it involves mixed constraints, comparators from the BO literature are scarce. Our EFI implementation deploys the (−h, h) heuristic mentioned in the introduction. As representatives from the nonlinear optimization literature we include nlopt [11] and three adapted NOMAD [13] comparators, which are detailed in our supplementary material. In the left-hand plot we can see that our new ALBO comparators are the clear winners, with an L-BFGS-B-optimized EI search under the slack-variable AL implementation performing exceptionally well. The nlopt and NOMAD comparators are particularly poor. We allowed those to run for up to 7000 and 1000 iterations, respectively, and in the plot we scaled the x-axis (i.e., n) to put them on the same scale as the others.
The right-hand plot provides a view into the distribution of two key aspects of performance over the MC repetitions. Observe that "Slack AL + optim" finds valid values quickly, and optimal values not much later. Our adapted EFI is particularly slow at converging to optimal (valid) solutions.
Our final problem involves two input dimensions, an unknown objective function (i.e., one that must be modeled with a GP), one inequality constraint, and two equality constraints. The objective is a centered and re-scaled version of the "Goldstein-Price" function. The inequality constraint is the sinusoidal constraint from the LSQ problem [Section 4.1]. The first equality constraint is a centered "Branin" function; the second equality constraint is taken from [16] (henceforth the GBSP problem). Our supplement contains a full mathematical specification. Figure 3 shows our results on
Figure 3: Results on the GBSP problem. See Figure 2 caption.
this problem. Observe (left panel) that the original ALBO comparator makes rapid progress at first, but dramatically slows in later iterations. The other ALBO comparators, including EFI, converge much more reliably, with the "Slack AL + optim" comparator leading in both stages (early progress and ultimate convergence). Again, nlopt and NOMAD are poor; however, note that their relative comparison is reversed, and again we scaled the x-axis to view these on a similar scale as the others. The right panel shows the proportion of valid and optimal solutions for "Slack AL + optim" and EFI. Notice that the AL method finds an optimal solution almost as quickly as it finds a valid one, with both happening substantially faster than for EFI.
5 Conclusion
The augmented Lagrangian (AL) is an established apparatus from the mathematical optimization
literature, enabling objective-only or bound-constrained optimizers to be deployed in settings with
constraints. Recent work involving Bayesian optimization (BO) within the AL framework (ALBO)
has shown great promise, especially toward obtaining global solutions under constraints. However,
those methods were deficient in at least two respects. One is that only inequality constraints could
be supported. Another was that evaluating the acquisition function, combining predictive mean and
variance information via expected improvement (EI), required Monte Carlo approximation. In this
paper we showed that both drawbacks could be addressed via a slack-variable reformulation of the
AL. Our method supports inequality, equality, and mixed constraints, and to our knowledge this
updated ALBO procedure is unique in the BO literature in its applicability to the most general mixed
constraints problem (1). We showed that the slack ALBO method outperforms modern alternatives in
several challenging constrained optimization problems.
Acknowledgments
We are grateful to Mickael Binois for comments on early drafts. RBG is grateful for partial support
from National Science Foundation grant DMS-1521702. The work of SMW is supported by the U.S.
Department of Energy, Office of Science, Office of Advanced Scientific Computing Research under
Contract No. DE-AC02-06CH11357. The work of SLD is supported by the Natural Sciences and
Engineering Research Council of Canada grant 418250.
6,013 | 644 | [unrecoverable OCR output from a scanned paper; no readable text survives]
6,014 | 6,440 | Avoiding Imposters and Delinquents: Adversarial
Crowdsourcing and Peer Prediction
Jacob Steinhardt
Stanford University
Gregory Valiant
Stanford University
Moses Charikar
Stanford University
Abstract
We consider a crowdsourcing model in which n workers are asked to rate the quality
of n items previously generated by other workers. An unknown set of αn workers
generate reliable ratings, while the remaining workers may behave arbitrarily and
possibly adversarially. The manager of the experiment can also manually evaluate
the quality of a small number of items, and wishes to curate together almost all
of the high-quality items with at most an ε fraction of low-quality items. Perhaps
surprisingly, we show that this is possible with an amount of work required of the
manager, and each worker, that does not scale with n: the dataset can be curated
with $\tilde{O}\!\left(\frac{1}{\beta\epsilon^3\alpha^4}\right)$ ratings per worker, and $\tilde{O}\!\left(\frac{1}{\epsilon^2}\right)$ ratings by the manager, where β
is the fraction of high-quality items. Our results extend to the more general setting
of peer prediction, including peer grading in online classrooms.
1 Introduction
How can we reliably obtain information from humans, given that the humans themselves are unreliable, and might even have incentives to mislead us? Versions of this question arise in crowdsourcing
(Vuurens et al., 2011), collaborative knowledge generation (Priedhorsky et al., 2007), peer grading
in online classrooms (Piech et al., 2013; Kulkarni et al., 2015), aggregation of customer reviews
(Harmon, 2004), and the generation/curation of large datasets (Deng et al., 2009). A key challenge
is to ensure high information quality despite the fact that many people interacting with the system
may be unreliable or even adversarial. This is particularly relevant when raters have an incentive to
collude and cheat as in the setting of peer grading, as well as for reviews on sites like Amazon and
Yelp, where artists and firms are incentivized to manufacture positive reviews for their own products
and negative reviews for their rivals (Harmon, 2004; Mayzlin et al., 2012).
One approach to ensuring quality is to use gold sets: questions where the answer is known, which
can be used to assess reliability on unknown questions. However, this is overly constraining: it
does not make sense for open-ended tasks such as knowledge generation on wikipedia, nor even for
crowdsourcing tasks such as "translate this paragraph" or "draw an interesting picture" where there
are different equally good answers. This approach may also fail in settings, such as peer grading in
massive online open courses, where students might collude to inflate their grades.
In this work, we consider the challenge of using crowdsourced human ratings to accurately and
efficiently evaluate a large dataset of content. In some settings, such as peer grading, the end goal
is to obtain the accurate evaluation of each datum; in other settings, such as the curation of a large
dataset, accurate evaluations could be leveraged to select a high-quality subset of a larger set of
variable-quality (perhaps crowd-generated) data.
There are several confounding difficulties that arise in extracting accurate evaluations. First, many
raters may be unreliable and give evaluations that are uncorrelated with the actual item quality;
second, some reliable raters might be harsher or more lenient than others; third, some items may be
harder to evaluate than others and so error rates could vary from item to item, even among reliable
raters; finally, some raters may even collude or want to hack the system. This raises the question: can
we obtain information from the reliable raters, without knowing who they are a priori?
In this work, we answer this question in the affirmative, under surprisingly weak assumptions:
• We do not assume that the majority of workers are reliable.
• We do not assume that the unreliable workers conform to any statistical model; they could
behave fully adversarially, in collusion with each other and with full knowledge of how the
reliable workers behave.
• We do not assume that the reliable worker ratings match the true ratings, but only that they
are "approximately monotonic" in the true ratings, in a sense that will be formalized later.
• We do not assume that there is a "gold set" of items with known ratings presented to each
user (as an adversary could identify and exploit this). Instead, we rely on a small number of
reliable judgments on randomly selected items, obtained after the workers submit their own
ratings; in practice, these could be obtained by rating those items oneself.
For concreteness, we describe a simple formalization of the crowdsourcing setting (our actual results
hold in a more general setting). We imagine that we are the dataset curator, so that "us" and "ourselves"
refers in general to whoever is curating the data. There are n raters and m items to evaluate, which
have an unknown quality level in [0, 1]. At least αn workers are "reliable" in that their judgments
match our own in expectation, and they make independent errors. We assign each worker to evaluate
at most k randomly selected items. In addition, we ourselves judge k0 items. Our goal is to recover
the β-quantile: the set T* of the βm highest-quality items. Our main result implies the following:

Theorem 1. In the setting above, suppose n = m. Then there is $k = \tilde{O}\!\left(\frac{1}{\beta\epsilon^3\alpha^4}\right)$ and $k_0 = \tilde{O}\!\left(\frac{1}{\epsilon^2}\right)$
such that, with probability 99%, we can identify βm items with average quality only ε worse than T*.
Interestingly, the amount of work that each worker (and we ourselves) has to do does not grow with
n; it depends only on the fraction α of reliable workers and the desired accuracy ε. While the number
of evaluations k for each worker is likely not optimal, we note that the amount of work k0 required of
us is close to optimal: for ε ≤ β, it is information theoretically necessary for us to evaluate Ω(1/ε²)
items, via a reduction to estimating noisy coin flips.
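To see why Ω(1/ε²) is the right scale, a standard two-point argument suffices (this is our sketch, not the paper's exact reduction): estimating an average quality to within ε is at least as hard as distinguishing a coin of bias 1/2 from one of bias 1/2 + ε, and

% Standard coin-distinguishing lower bound (a sketch; not the paper's reduction).
\mathrm{KL}\!\left(\mathrm{Bern}(\tfrac12 + \epsilon) \,\middle\|\, \mathrm{Bern}(\tfrac12)\right) = O(\epsilon^2)
\quad\Longrightarrow\quad
k_0 = \Omega(1/\epsilon^2) \text{ ratings are required for constant success probability.}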
Why is it necessary to include some of our own ratings? If we did not, and α < 1/2, then an adversary
could create a set of dishonest raters that were identical to the reliable raters except with the item
indices permuted by a random permutation of {1, . . . , m}. In this case, there is no way to distinguish
the honest from the dishonest raters except by breaking the symmetry with our own ratings.
Our main result holds in a considerably more general setting where we require a weaker form of
inter-rater agreement; for example, our results hold even if some of the reliable raters are harsher
than others, as long as the expected ratings induce approximately the same ranking. The focus on
quantiles rather than raw ratings is what enables this. Note that once we estimate the quantiles, we
can approximately recover the ratings by evaluating a few items in each quantile.
Figure 1: Illustration of our problem setting. We observe a small number of ratings from each rater
(indicated in blue), which we represent as entries in a matrix Ã (unobserved ratings in red, treated as
zero by our algorithm). There is also a true rating r* that we would assign to each item; by rating
some items ourself, we observe some entries of r* (also in blue). Our goal is to recover the set T*
representing the top β fraction of items under r*. As an intermediate step, we approximately recover
a matrix M* that indicates the top items for each individual rater.
Our technical tools draw on semidefinite programming methods for matrix completion, which have
been used to study graph clustering as well as community detection in the stochastic block model
(Holland et al., 1983; Condon and Karp, 2001). Our setting corresponds to the sparse case of graphs
with constant degree, which has recently seen great interest (Decelle et al., 2011; Mossel et al., 2012;
2013b;a; Massouli?, 2014; Gu?don and Vershynin, 2014; Mossel et al., 2015; Chin et al., 2015; Abbe
and Sandon, 2015a;b; Makarychev et al., 2015). Makarychev et al. (2015) in particular provide an
algorithm that is robust to adversarial perturbations, but only if the perturbation has size o(n); see
also Cai and Li (2015) for robustness results when the degree of the graph is logarithmic.
Several authors have considered semirandom settings for graph clustering, which allow for some
types of adversarial behavior (Feige and Krauthgamer, 2000; Feige and Kilian, 2001; Coja-Oghlan,
2004; Krivelevich and Vilenchik, 2006; Coja-Oghlan, 2007; Makarychev et al., 2012; Chen et al.,
2014; Guédon and Vershynin, 2014; Moitra et al., 2015; Agarwal et al., 2015). In our setting, these
semirandom models are unsuitable because they rule out important types of strategic behavior, such
as an adversary rating some items accurately to gain credibility. By allowing arbitrary behavior
from the adversary, we face a key technical challenge: while previous analyses consider errors
relative to a ground truth clustering, in our setting the ground truth only exists for rows of the matrix
corresponding to reliable raters, while the remaining rows could behave arbitrarily even in the limit
where all ratings are observed. This necessitates a more careful analysis, which helps to clarify what
properties of a clustering are truly necessary for identifying it.
2 Algorithm and Intuition
We now describe our recovery algorithm. To fix notation, we assume that there are n raters and m
items, and that we observe a matrix Ã ∈ [0, 1]^{n×m}: Ã_ij = 0 if rater i does not rate item j, and
otherwise Ã_ij is the assigned rating, which takes values in [0, 1]. In the settings we care about, Ã is
very sparse: each rater only rates a few items. Remember that our goal is to recover the β-quantile
T* of the best items according to our own rating.

Our algorithm is based on the following intuition: the reliable raters must (approximately) agree on
the ranking of items, and so if we can cluster the rows of Ã appropriately, then the reliable raters
should form a single very large cluster (of size αn). There can be at most 1/α disjoint clusters of this
size, and so we can manually check the accuracy of each large cluster (by checking agreement with
our own rating on a few randomly selected items) and then choose the best one.
One major challenge in using the clustering intuition is the sparsity of Ã: any two rows of Ã will
almost certainly have no ratings in common, so we must exploit the global structure of Ã to discover
clusters, rather than using pairwise comparisons of rows. The key is to view our problem as a form of
noisy matrix completion: we imagine a matrix Ā in which all the ratings have been filled in and
all noise from individual ratings has been removed. We define a matrix M* that indicates the top βm
items in each row of Ā: M*_ij = 1 if item j has one of the top βm ratings from rater i, and M*_ij = 0
otherwise (this differs from the actual definition of M* given in Section 4, but is the same in spirit).
If we could recover M*, we would be close to obtaining the clustering we wanted.
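As a concrete reading of this definition, the following small sketch (ours, not from the paper; it assumes a fully observed, denoised matrix Ā is available as a numpy array) builds M* by marking each rater's top βm items:

import numpy as np

def top_quantile_indicator(A_bar, beta):
    """Row-wise indicator of each rater's top beta*m items (the idealized M*)."""
    n, m = A_bar.shape
    t = int(np.floor(beta * m))
    M_star = np.zeros((n, m))
    top = np.argsort(-A_bar, axis=1)[:, :t]  # indices of the t largest entries per row
    np.put_along_axis(M_star, top, 1.0, axis=1)
    return M_star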
?
Algorithm 1 Algorithm for recovering ?-quantile matrix M
1: Parameters: reliable fraction ?, quantile ?, tolerance , number of raters n, number of items m
?
2: Input: noisy rating matrix A
?
3: Let M be the solution of the optimization problem (1):
? M i,
maximize hA,
subject to 0 ? Mij ? 1 ?i, j,
P
j Mij ? ?m ?j,
(1)
kM k? ?
where k ? k? denotes nuclear norm.
?.
4: Output M
3
2p
??nm,
?
?.
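For illustration, here is a minimal sketch (ours, not the authors' code; the function name and its arguments are our own) of how problem (1) could be posed with the cvxpy modeling library:

import cvxpy as cp
import numpy as np

def recover_quantile_matrix(A_tilde, alpha, beta, eps):
    # Program (1): maximize <A~, M> under box, row-sum, and nuclear norm constraints.
    n, m = A_tilde.shape
    M = cp.Variable((n, m))
    constraints = [
        M >= 0,
        M <= 1,                          # 0 <= M_ij <= 1
        cp.sum(M, axis=1) <= beta * m,   # each row selects at most beta*m items
        cp.normNuc(M) <= (2 / eps) * np.sqrt(alpha * beta * n * m),
    ]
    problem = cp.Problem(cp.Maximize(cp.sum(cp.multiply(A_tilde, M))), constraints)
    problem.solve()
    return M.value

A generic solver will not scale to the regimes the paper targets; the sketch is only meant to make the feasible set concrete.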
Algorithm 2 Algorithm for recovering an accurate β-quantile T from the β-quantile matrix M̂.
1: Parameters: tolerance ε, reliable fraction α
2: Input: matrix M̂ of approximate β-quantiles, noisy ratings r̃
3: Select 2 log(2/δ)/α indices i ∈ [n] at random.
4: Let i* be the index among these for which ⟨M̂_i, r̃⟩ is largest, and let T0 ← M̂_{i*}.  ▷ T0 ∈ [0, 1]^m
5: do T ← RandomizedRound(T0) while ⟨T − T0, r̃⟩ < −εβk0/4
6: return T  ▷ T ∈ {0, 1}^m
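A sketch of Algorithm 2 follows (ours; the independent Bernoulli rounding and the exact form of the acceptance threshold are stand-ins for the paper's RandomizedRound and the Section 5 details):

import numpy as np

def randomized_round(t0, rng):
    # Independent Bernoulli rounding of t0 in [0,1]^m to {0,1}^m (a simple
    # stand-in; the actual procedure may round dependently to preserve the sum).
    return (rng.random(t0.shape) < t0).astype(float)

def recover_quantile_set(M_hat, r_tilde, alpha, beta, eps, k0, delta, seed=0):
    rng = np.random.default_rng(seed)
    n, _ = M_hat.shape
    n_probe = min(n, int(np.ceil(2 * np.log(2 / delta) / alpha)))
    idx = rng.choice(n, size=n_probe, replace=False)     # step 3
    i_star = idx[np.argmax(M_hat[idx] @ r_tilde)]        # step 4
    t0 = M_hat[i_star]
    while True:                                          # step 5
        T = randomized_round(t0, rng)
        if (T - t0) @ r_tilde >= -eps * beta * k0 / 4:   # assumed threshold
            return T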
The key observation that allows us to approximate M* given only the noisy, incomplete Ã is that M*
has low-rank structure: since all of the reliable raters agree with each other, their rows in M* are all
identical, and so there is an (αn) × m submatrix of M* with rank 1. This inspires the low-rank matrix
completion algorithm for recovering M̂ given in Algorithm 1. Each row of M is constrained to have
sum at most βm, and M as a whole is constrained to have nuclear norm ‖M‖_* at most (2/ε)√(αβnm).
Recall that the nuclear norm is the sum of the singular values of M; in the same way that the ℓ¹-norm
is a convex surrogate for the ℓ⁰-norm, the nuclear norm acts as a convex surrogate for the rank of M
(i.e., number of non-zero singular values). The optimization problem (1) therefore chooses a set of
βm items in each row to maximize the corresponding values in Ã, while constraining the item sets to
have low rank (where low rank is relaxed to low nuclear norm to obtain a convex problem). This
low-rank constraint acts as a strong regularizer that quenches the noise in Ã.
Once we have recovered M̂ using Algorithm 1, it remains to recover a specific set T that approximates
the β-quantile according to our ratings. Algorithm 2 provides a recipe for doing so: first, rate k0
items at random, obtaining the vector r̃: r̃_j = 0 if we did not rate item j, and otherwise r̃_j is the
(possibly noisy) rating that we assign to item j. Next, score each row M̂_i based on the noisy ratings,
Σ_j M̂_ij r̃_j, and let T0 be the highest-scoring M̂_i among O(log(2/δ)/α) randomly selected i. Finally,
randomly round the vector T0 ∈ [0, 1]^m to a discrete vector T ∈ {0, 1}^m, and treat T as the indicator
function of a set approximating the β-quantile (see Section 5 for details of the rounding algorithm).

In summary, given a noisy rating matrix Ã, we will first run Algorithm 1 to recover a β-quantile
matrix M̂ for each rater, and then run Algorithm 2 to recover our personal β-quantile using M̂.
Possible attacks by adversaries. In our algorithm, the adversaries can influence M̂_i for reliable
raters i via the nuclear norm constraint (note that the other constraints are separable across rows).
This makes sense because the nuclear norm is what causes us to pool global structure across raters
(and thus potentially pool bad information). In order to limit this influence, the constraint on the
nuclear norm is weaker than is typical by a factor of 2/ε; it is not clear to us whether this is actually
necessary or due to a loose analysis.

The constraint Σ_j M_ij ≤ βm is not typical in the literature. For instance, Chen et al. (2014) place
no constraint on the sum of each row in M (they instead normalize M̂ to lie in [−1, 1]^{n×m}, which
recovers the items with positive rating rather than the β-quantile). Our row normalization constraint
prevents an attack in which a spammer rates a random subset of items as high as possible and rates the
remaining items as low as possible. If the actual set of high-quality items has density much smaller
than 50%, then the spammer gains undue influence relative to honest raters that only rate e.g. 10% of
the items highly. Normalizing M to have a fixed row sum prevents this; see Section B for details.
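To put hypothetical numbers on the attack (ours, not from the paper): take β = 0.1, let honest raters mark their top 0.1m items, and let a spammer rate a random 0.5m items at 1. Without the row-sum constraint, the optimizer could let the spammer's row of M select all 0.5m of its highly rated items, so that row contributes roughly

$$\frac{\langle \tilde A_{\text{spam}}, M_{\text{spam}}\rangle}{\langle \tilde A_{\text{honest}}, M_{\text{honest}}\rangle} \approx \frac{0.5\,m}{0.1\,m} = 5,$$

five times the mass of an honest row in the objective ⟨Ã, M⟩; the constraint Σ_j M_ij ≤ βm caps every row at the same budget.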
3 Assumptions and Approach
We now state our assumptions more formally, state the general form of our results, and outline the
key ingredients of the proof. In our setting, we can query a rater i ∈ [n] and item j ∈ [m] to obtain a
rating Ã_ij ∈ [0, 1]. Let r* ∈ [0, 1]^m denote the vector of true ratings of the items. We can also query
an item j (by rating it ourself) to obtain a noisy rating r̃_j such that E[r̃_j] = r*_j.

Let C ⊆ [n] be the set of reliable raters, where |C| ≥ αn. Our main assumption is that the reliable
raters make independent errors:

Assumption 1 (Independence). When we query a pair (i, j) with i ∈ C, we obtain an output Ã_ij
whose value is independent of all of the other queries so far. Similarly, when we query an item j, we
obtain an output r̃_j that is independent of all of the other queries so far.
Algorithm 3 Algorithm for obtaining (unreliable) ratings matrix Ã and noisy ratings r̃.
1: Input: number of raters n, number of items m, and number of ratings k and k0.
2: Initially assign each rater to each item independently with probability k/m.
3: For each rater with more than 2k items, arbitrarily unassign items until there are 2k remaining.
4: For each item with more than 2k raters, arbitrarily unassign raters until there are 2k remaining.
5: Have the raters submit ratings of their assigned items, and let Ã denote the resulting matrix of
ratings, with missing entries filled in with zeros.
6: Generate r̃ by rating items with probability k0/m (fill in missing entries with zeros).
7: Output Ã, r̃.
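The random assignment of steps 2-4 admits a direct simulation (our sketch; it returns the boolean rater-item assignment mask):

import numpy as np

def assign_ratings(n, m, k, seed=0):
    rng = np.random.default_rng(seed)
    E = rng.random((n, m)) < k / m            # step 2: include each pair w.p. k/m
    for i in range(n):                        # step 3: cap each rater at 2k items
        js = np.flatnonzero(E[i])
        if len(js) > 2 * k:
            E[i, rng.choice(js, size=len(js) - 2 * k, replace=False)] = False
    for j in range(m):                        # step 4: cap each item at 2k raters
        ii = np.flatnonzero(E[:, j])
        if len(ii) > 2 * k:
            E[rng.choice(ii, size=len(ii) - 2 * k, replace=False), j] = False
    return E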
Note that Assumption 1 allows the unreliable ratings to depend on all previous ratings and also allows
arbitrary collusion among the unreliable raters. In our algorithm, we will generate our own ratings
after querying everyone else, which ensures that at least r̃ is independent of the adversaries.

We need a way to formalize the idea that the reliable raters agree with us. To this end, for i ∈ C
let Ā_ij = E[Ã_ij] be the expected rating that rater i assigns to item j. We want Ā to be roughly
increasing in r*:

Definition 1 (Monotonic raters). We say that the reliable raters are (L, ε0)-monotonic if

$$r^*_j - r^*_{j'} \le L \cdot (\bar A_{ij} - \bar A_{ij'}) + \epsilon_0 \quad \text{whenever } r^*_j \ge r^*_{j'}, \qquad (2)$$

for all i ∈ C and all j, j′ ∈ [m].
The (L, ε0)-monotonicity property says that if we think that one item is substantially better than another
item, the reliable raters should think so as well. As an example, suppose that our own ratings are
binary (r*_j ∈ {0, 1}) and that each rating Ã_{i,j} matches r*_j with probability 3/5. Then Ā_{i,j} = 2/5 + (1/5) r*_j,
and hence the ratings are (5, 0)-monotonic. In general, the monotonicity property is fairly mild: if
the reliable ratings are not (L, ε0)-monotonic, it is not clear that they should even be called reliable!
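To verify the (5, 0) claim against Definition 1 (a one-line check under the stated binary model): whenever r*_j ≥ r*_{j′},

$$L \cdot (\bar A_{ij} - \bar A_{ij'}) = 5 \cdot \tfrac{1}{5}\,(r^*_j - r^*_{j'}) = r^*_j - r^*_{j'},$$

so inequality (2) holds with equality and ε0 = 0.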
Algorithm for collecting ratings. Under the model given in Assumption 1, our algorithm for
collecting ratings is given in Algorithm 3. Given integers k and k0, Algorithm 3 assigns each rater
at most 2k ratings, and assigns us k0 ratings in expectation. The output is a noisy rating matrix
Ã ∈ [0, 1]^{n×m} as well as a noisy rating vector r̃ ∈ [0, 1]^m. Our main result states that we can use Ã
and r̃ to estimate the β-quantile T*; throughout we will assume that m is at least n.

Theorem 2. Let m ≥ n. Suppose that Assumption 1 holds, that the reliable raters are (L, ε0)-monotonic,
and that we run Algorithm 3 to obtain noisy ratings. Then there is $k = O\!\left(\frac{\log^3(2/\delta)}{\beta\epsilon^3\alpha^4}\cdot\frac{m}{n}\right)$
and $k_0 = O\!\left(\frac{\log(2/\alpha\delta)}{\epsilon^2}\right)$ such that, with probability 1 − δ, Algorithms 1 and 2 output a set T with

$$\frac{1}{\beta m}\left(\sum_{j \in T^*} r^*_j - \sum_{j \in T} r^*_j\right) \le (2L + 1)\,\epsilon + 2\epsilon_0. \qquad (3)$$
Note that the amount of work for the raters scales as m/n. Some dependence on m/n is necessary,
since we need to make sure that every item gets rated at least once.
The proof of Theorem 2 can be split into two parts: analyzing Algorithm 1 (Section 4), and analyzing
Algorithm 2 (Section 5). At a high level, analyzing Algorithm 1 involves showing that the nuclear
norm constraint in (1) imparts sufficient noise robustness while not allowing the adversary too much
influence over the reliable rows of M̂. Analyzing Algorithm 2 is far more straightforward, and
requires only standard concentration inequalities and a standard randomized rounding idea (though
the latter is perhaps not well-known, so we will explain it briefly in Section 5).
4 Recovering M̂ (Algorithm 1)

The goal of this section is to show that solving the optimization problem (1) recovers a matrix M̂ that
approximates the β-quantile of r* in the following sense:
Proposition 1. Under the conditions of Theorem 2 and the corresponding values of k and k0,
Algorithm 1 outputs a matrix M̂ satisfying

$$\frac{1}{|C|}\,\frac{1}{\beta m} \sum_{i \in C} \sum_{j \in [m]} (T^*_j - \hat M_{ij})\,\bar A_{ij} \le \epsilon \qquad (4)$$

with probability 1 − δ, where T*_j = 1 if j lies in the β-quantile of r*, and is 0 otherwise.
Proposition 1 says that the row M̂_i is good according to rater i's ratings Ā_i. Note that (L, ε0)-
monotonicity then implies that M̂_i is also good according to r*. In particular (see A.2 for details)

$$\frac{1}{|C|}\,\frac{1}{\beta m} \sum_{i \in C} \sum_{j \in [m]} (T^*_j - \hat M_{ij})\,r^*_j \;\le\; L \cdot \frac{1}{|C|}\,\frac{1}{\beta m} \sum_{i \in C} \sum_{j \in [m]} (T^*_j - \hat M_{ij})\,\bar A_{ij} + \epsilon_0 \;\le\; L\epsilon + \epsilon_0. \qquad (5)$$
Proving Proposition 1 involves two major steps: showing (a) that the nuclear norm constraint in (1)
imparts noise-robustness, and (b) that the constraint does not allow the adversaries to influence M̂_C
too much. (For a matrix X we let X_C denote the rows indexed by C and X_C̄ the remaining rows.)

In a bit more detail, if we let M* denote the "ideal" value of M̂, and B denote a denoised version
of Ã, we first show (Lemma 1) that ⟨B, M̂ − M*⟩ ≥ −ε′ for some ε′ determined below. This is
established via the matrix concentration inequalities in Le et al. (2015). Lemma 1 would already
suffice for standard approaches (e.g., Guédon and Vershynin, 2014), but in our case we must grapple
with the issue that the rows of B could be arbitrary outside of C, and hence closeness according to
B may not imply actual closeness between M̂ and M*. Our main technical contribution, Lemma 2,
shows that ⟨B_C, M̂_C − M*_C⟩ ≥ ⟨B, M̂ − M*⟩ − ε′; that is, closeness according to B implies closeness
according to B_C. We can then restrict attention to the reliable raters, and obtain Proposition 1.
Part 1: noise-robustness. Let $B$ be the matrix satisfying $B_{\mathcal{C}} = \frac{k}{m}\bar{A}^*_{\mathcal{C}}$, $B_{\bar{\mathcal{C}}} = \tilde{A}_{\bar{\mathcal{C}}}$, which denoises $\tilde{A}$ on $\mathcal{C}$. The scaling $\frac{k}{m}$ is chosen so that $\mathbb{E}[\tilde{A}_{\mathcal{C}}] \approx B_{\mathcal{C}}$. Also define $R \in \mathbb{R}^{n \times m}$ by $R_{ij} = T^*_j$.
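For concreteness, a hypothetical NumPy sketch of these two definitions (all names are ours; the reliable index set $\mathcal{C}$, the budget $k$, and both rating matrices are assumed given):

import numpy as np

def build_B_and_R(A_tilde, A_bar_star, C, k, T_star):
    # A_tilde: observed noisy ratings; A_bar_star: expected reliable ratings;
    # C: index array of reliable raters; T_star: {0,1} alpha-quantile indicator
    n, m = A_tilde.shape
    B = A_tilde.copy()
    B[C, :] = (k / m) * A_bar_star[C, :]   # denoise reliable rows: E[A_tilde_C] ~ B_C
    R = np.tile(T_star, (n, 1))            # R_ij = T*_j for every row i
    return B, R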
Ideally, we would like to have $M_{\mathcal{C}} = R_{\mathcal{C}}$, i.e., $M$ matches $T^*$ on all the rows of $\mathcal{C}$. In light of this, we will let $M^*$ be the solution to the following "corrected" program, which we don't have access to (since it involves knowledge of $\bar{A}^*$ and $\mathcal{C}$), but which will be useful for analysis purposes:

$$\begin{aligned} \text{maximize } \; & \langle B, M \rangle, \\ \text{subject to } \; & M_{\mathcal{C}} = R_{\mathcal{C}}, \\ & \textstyle\sum_j M_{ij} \le \alpha m \;\; \forall i, \\ & 0 \le M_{ij} \le 1 \;\; \forall i, j, \\ & \|M\|_* \le \tfrac{2}{\epsilon}\sqrt{\alpha\beta\, n m} \end{aligned} \qquad (6)$$

Importantly, (6) enforces $M_{ij} = T^*_j$ for all $i \in \mathcal{C}$.
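To make the program concrete, here is a minimal sketch of how (6) could be posed with the cvxpy modeling library. Everything here is an assumption for illustration: the names are invented, the nuclear-norm radius follows our reconstruction of (6) above, and dropping the analysis-only constraint $M_{\mathcal{C}} = R_{\mathcal{C}}$ gives the analogue of the program (1) that the algorithm actually solves.

import cvxpy as cp
import numpy as np

def solve_corrected_program(B, R, C, alpha, beta, eps):
    n, m = B.shape
    M = cp.Variable((n, m))
    radius = (2 / eps) * np.sqrt(alpha * beta * n * m)  # assumed radius from (6)
    constraints = [
        M[C, :] == R[C, :],              # M_C = R_C (analysis-only; drop for (1))
        cp.sum(M, axis=1) <= alpha * m,  # each row selects at most an alpha-fraction
        M >= 0, M <= 1,
        cp.normNuc(M) <= radius,         # nuclear norm constraint
    ]
    prob = cp.Problem(cp.Maximize(cp.sum(cp.multiply(B, M))), constraints)
    prob.solve()
    return M.value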
Lemma 1 shows that $\tilde{M}$ is "close" to $M^*$:

Lemma 1. Let $m \ge n$. Suppose that Assumption 1 holds. Then there is a $k = O\!\left(\frac{\log^3(2/\delta)}{\beta \epsilon^3 \alpha^4} \cdot \frac{m}{n}\right)$ such that the solution $\tilde{M}$ to (1) performs nearly as well as $M^*$ under $B$; specifically, with probability $1 - \delta$,

$$\langle B, \tilde{M} \rangle \ge \langle B, M^* \rangle - \epsilon\alpha k n. \qquad (7)$$

Note that $\tilde{M}$ is not necessarily feasible for (6), because of the constraint $M_{\mathcal{C}} = R_{\mathcal{C}}$; Lemma 1 merely asserts that $\tilde{M}$ approximates $M^*$ in objective value. The proof of Lemma 1, given in Section A.3, primarily involves establishing a uniform deviation result; if we let $\mathcal{P}$ denote the feasible set for (1), then we wish to show that $|\langle \tilde{A} - B, M \rangle| \le \frac{1}{2}\epsilon\alpha k n$ for all $M \in \mathcal{P}$. This would imply that the objectives of (1) and (6) are essentially identical, and so optimizing one also optimizes the other. Using the inequality $|\langle \tilde{A} - B, M \rangle| \le \|\tilde{A} - B\|_{op} \|M\|_*$, where $\|\cdot\|_{op}$ denotes the operator norm, it suffices to establish a matrix concentration inequality bounding $\|\tilde{A} - B\|_{op}$. This bound follows from the general matrix concentration result of Le et al. (2015), stated in Section A.1.
Part 2: bounding the influence of adversaries. We next show that the nuclear norm constraint does
not give the adversaries too much influence over the de-noised program (6); this is the most novel
aspect of our argument.
Figure 2: Illustration of our Lagrangian duality argument, and of the role of $Z$. The blue region represents the nuclear norm constraint and the gray region the remaining constraints. Where the blue region slopes downwards, a decrease in $M_{\mathcal{C}}$ can be offset by an increase in $M_{\bar{\mathcal{C}}}$ when measuring $\langle B, M \rangle$. By linearizing the nuclear norm constraint, the vector $B - Z$ accounts for this offset, and the red region represents the constraint $\langle B_{\mathcal{C}} - Z_{\mathcal{C}}, M^*_{\mathcal{C}} - M_{\mathcal{C}} \rangle \le \epsilon$, which will contain $\tilde{M}$.
Suppose that the constraint on $\|M\|_*$ were not present in (6). Then the adversaries would have no influence on $M^*_{\mathcal{C}}$, because all the remaining constraints in (6) are separable across rows. How can we
quantify the effect of this nuclear norm constraint? We exploit Lagrangian duality, which allows us to
replace constraints with appropriate modifications to the objective function.
To gain some intuition, consider Figure 2. The key is that the Lagrange multiplier $Z_{\mathcal{C}}$ can bound the amount that $\langle B, M \rangle$ can increase due to changing $M$ outside of $\mathcal{C}$. If we formalize this and analyze $Z$ in detail, we obtain the following result:
Lemma 2. Let $m \ge n$. Then there is a $k = O\!\left(\frac{\log^3(2/\delta)}{\beta\alpha^2} \cdot \frac{m}{n}\right)$ such that, with probability at least $1 - \delta$, there exists a matrix $Z$ with $\operatorname{rank}(Z) = 1$, $\|Z\|_F \le k\sqrt{\alpha\beta n/m}$, and

$$\langle B_{\mathcal{C}} - Z_{\mathcal{C}}, M^*_{\mathcal{C}} - M_{\mathcal{C}} \rangle \le \langle B, M^* - M \rangle \quad \text{for all } M \in \mathcal{P}. \qquad (8)$$

By localizing $\langle B, M^* - M \rangle$ to $\mathcal{C}$ via (8), Lemma 2 bounds the effect that the adversaries can have on $\tilde{M}_{\mathcal{C}}$. It is therefore the key technical tool powering our results, and is proved in Section A.4.
Proposition 1 is proved from Lemmas 1 and 2 in Section A.5.
5  Recovering $T$ (Algorithm 2)

In this section we show that if $\tilde{M}$ satisfies the conclusion of Proposition 1, then Algorithm 2 recovers a set $T$ that approximates $T^*$ well. We represent the sets $T$ and $T^*$ as $\{0,1\}$-vectors, and use the notation $\langle T, r \rangle$ to denote $\sum_{j \in [m]} T_j r_j$. Formally, we show the following:
Proposition 2. Suppose Assumption 1 holds. For some $k_0 = O\!\left(\frac{\log(2/\delta\beta)}{\epsilon^2}\right)$, with probability $1 - \delta$, Algorithm 2 outputs a set $T$ satisfying

$$\langle T^* - T, r^* \rangle \le \frac{2}{|\mathcal{C}|} \sum_{i \in \mathcal{C}} \langle T^* - \tilde{M}_i, r^* \rangle + \epsilon \alpha m. \qquad (9)$$
To establish Proposition 2, first note that with probability $1 - \frac{\delta}{2}$, at least one of the $\frac{2\log(2/\delta)}{\beta}$ randomly selected $i$ from Algorithm 2 will have cost $\langle T^* - \tilde{M}_i, r^* \rangle$ within twice the average cost across $i \in \mathcal{C}$. This is because with probability $\beta$, a randomly selected $i$ will lie in $\mathcal{C}$, and with probability $\frac{1}{2}$, an $i \in \mathcal{C}$ will have cost at most twice the average cost (by Markov's inequality).
The remainder of the proof hinges on two results. First, we establish a concentration bound showing that $\sum_j \tilde{M}_{ij}\,\tilde{r}_j$ is close to $\frac{k_0}{m} \sum_j \tilde{M}_{ij}\, r^*_j$ for any fixed $i$, and hence (by a union bound) also for the $\frac{2\log(2/\delta)}{\beta}$ randomly selected $i$. This yields the following lemma, which is a straightforward application of Bernstein's inequality (see Section A.6 for a proof):
Lemma 3. Let $i^*$ be the row selected in Algorithm 2. Suppose that $\tilde{r}$ satisfies Assumption 1. For some $k_0 = O\!\left(\frac{\log(2/\delta\beta)}{\epsilon^2}\right)$, with probability $1 - \delta$, we have

$$\langle T^* - \tilde{M}_{i^*}, r^* \rangle \le \frac{2}{|\mathcal{C}|} \sum_{i \in \mathcal{C}} \langle T^* - \tilde{M}_i, r^* \rangle + \epsilon \alpha m. \qquad (10)$$
Having recovered a good row $T_0 = \tilde{M}_{i^*}$, we need to turn $T_0$ into a binary vector so that Algorithm 2 can output a set; we do so via randomized rounding, obtaining a vector $T \in \{0, 1\}^m$ such that $\mathbb{E}[T] = T_0$ (where the randomness is with respect to the choices made by the algorithm). Our rounding procedure is given in Algorithm 4; the following lemma, proved in A.7, asserts its correctness:

Lemma 4. The output $T$ of Algorithm 4 satisfies $\mathbb{E}[T] = T_0$, $\|T\|_0 \le \alpha m$.
Algorithm 4 Randomized rounding algorithm.
1: procedure RANDOMIZEDROUND($T_0$)          ▷ $T_0 \in [0,1]^m$ satisfies $\|T_0\|_1 \le \alpha m$
2:   Let $s$ be the vector of partial sums of $T_0$   ▷ i.e., $s_j = (T_0)_1 + \cdots + (T_0)_j$
3:   Sample $u \sim \mathrm{Uniform}([0, 1])$.
4:   $T \leftarrow [0, \ldots, 0] \in \mathbb{R}^m$
5:   for $z = 0$ to $\alpha m - 1$ do
6:     Find $j$ such that $u + z \in [s_{j-1}, s_j)$, and set $T_j = 1$.   ▷ if no such $j$ exists, skip this step
7:   end for
8:   return $T$
9: end procedure
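For concreteness, a direct Python translation of Algorithm 4 could look as follows (a sketch, assuming $\alpha m$ is an integer as in the pseudocode; the function name is ours). Because the intervals $[s_{j-1}, s_j)$ have length $(T_0)_j \le 1$ and together fit inside $[0, \alpha m]$, each item $j$ is selected by exactly one of the shifted samples $u, u+1, \ldots$ with probability $(T_0)_j$, which gives $\mathbb{E}[T] = T_0$ as in Lemma 4.

import numpy as np

def randomized_round(T0, alpha, rng=None):
    rng = rng or np.random.default_rng()
    m = len(T0)
    s = np.cumsum(T0)                     # partial sums: s_j = (T0)_1 + ... + (T0)_j
    u = rng.uniform(0.0, 1.0)             # one shared uniform sample
    T = np.zeros(m, dtype=int)
    for z in range(int(round(alpha * m))):
        # find j with u + z in [s_{j-1}, s_j); skip the step if none exists
        j = int(np.searchsorted(s, u + z, side="right"))
        if j < m:
            T[j] = 1
    return T                              # E[T] = T0 and ||T||_0 <= alpha*m (Lemma 4)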
The remainder of the proof involves lower-bounding the probability that T is accepted in each stage
of the while loop in Algorithm 2. We refer the reader to Section A.8 for details.
6  Open Directions and Related Work
Future Directions. On the theoretical side, perhaps the most immediate open question is whether it is possible to improve the dependence of $k$ (the amount of work required per worker) on the parameters $\alpha$, $\beta$, and $\epsilon$. It is tempting to hope that when $m = n$ a tight result would have $k = \tilde{O}\!\left(\frac{1}{\beta\epsilon^2}\right)$, in loose analogy to recent results for the stochastic block model (Abbe and Sandon, 2015b;a; Banks and Moore, 2016). For stochastic block models, there is conjectured to be a gap between computational and information-theoretic thresholds, and it would be interesting to see if a similar phenomenon holds here (the scaling for $k$ given above is based on the conjectured computational threshold).

A second open question concerns the scaling in $n$: if $n \gg m$, can we get by with much less work per rater? Finally, it would be interesting to consider adaptivity: if the choice of queries is based on previous worker ratings, can we reduce the amount of work?
Related work. Our setting is closely related to the problem of peer prediction (Miller et al., 2005),
in which we wish to obtain truthful information from a population of raters by exploiting inter-rater
agreement. While several mechanisms have been proposed for these tasks, they typically assume that
rater accuracy is observable online (Resnick and Sami, 2007), that the dishonest raters are rational
agents maximizing a payoff function (Dasgupta and Ghosh, 2013; Kamble et al., 2015; Shnayder
et al., 2016), that the raters follow a simple statistical model (Karger et al., 2014; Zhang et al., 2014;
Zhou et al., 2015), or some combination of the above (Shah and Zhou, 2015; Shah et al., 2015).
Ghosh et al. (2011) allow o(n) adversaries to behave arbitrarily but require the rest to be stochastic.
The work closest to ours is Christiano (2014; 2016), which studies online collaborative prediction in
the presence of adversaries; roughly, when raters interact with an item they predict its quality and
afterwards observe the actual quality; the goal is to minimize the number of incorrect predictions
among the honest raters. This differs from our setting in that (i) the raters are trying to learn the item
qualities as part of the task, and (ii) there is no requirement to induce a final global estimate of the
high-quality items, which is necessary for estimating quantiles. It seems possible however that there
are theoretical ties between this setting and ours, which would be interesting to explore.
Acknowledgments. JS was supported by a Fannie & John Hertz Foundation Fellowship, an NSF Graduate Research Fellowship, and a Future of Life Institute grant. GV was supported by NSF CAREER award CCF-1351108, a Sloan Foundation Research Fellowship, and a research grant from the Okawa Foundation. MC was supported by NSF grants CCF-1565581, CCF-1617577, CCF-1302518 and a Simons Investigator Award.
References
E. Abbe and C. Sandon. Community detection in general stochastic block models: fundamental limits and efficient recovery algorithms. arXiv, 2015a.
E. Abbe and C. Sandon. Detection in the stochastic block model with multiple clusters: proof of the achievability conjectures, acyclic BP, and the information-computation gap. arXiv, 2015b.
N. Agarwal, A. S. Bandeira, K. Koiliaris, and A. Kolla. Multisection in the stochastic block model using semidefinite programming. arXiv, 2015.
J. Banks and C. Moore. Information-theoretic thresholds for community detection in sparse networks. arXiv, 2016.
T. T. Cai and X. Li. Robust and computationally feasible community detection in the presence of arbitrary outlier nodes. The Annals of Statistics, 43(3):1027–1059, 2015.
Y. Chen, S. Sanghavi, and H. Xu. Improved graph clustering. IEEE Transactions on Information Theory, 2014.
P. Chin, A. Rao, and V. Vu. Stochastic block model and community detection in the sparse graphs: A spectral algorithm with optimal rate of recovery. In Conference on Learning Theory (COLT), 2015.
P. Christiano. Provably manipulation-resistant reputation systems. arXiv, 2014.
P. Christiano. Robust collaborative online learning. arXiv, 2016.
A. Coja-Oghlan. Coloring semirandom graphs optimally. Automata, Languages and Programming, 2004.
A. Coja-Oghlan. Solving NP-hard semirandom graph problems in polynomial expected time. Journal of Algorithms, 62(1):19–46, 2007.
A. Condon and R. M. Karp. Algorithms for graph partitioning on the planted partition model. Random Structures and Algorithms, pages 116–140, 2001.
A. Dasgupta and A. Ghosh. Crowdsourced judgement elicitation with endogenous proficiency. In WWW, 2013.
A. Decelle, F. Krzakala, C. Moore, and L. Zdeborová. Asymptotic analysis of the stochastic block model for modular networks and its algorithmic applications. Physical Review E, 84(6), 2011.
J. Deng, W. Dong, R. Socher, L. Li, K. Li, and L. Fei-Fei. ImageNet: A large-scale hierarchical image database. In Computer Vision and Pattern Recognition (CVPR), pages 248–255, 2009.
U. Feige and J. Kilian. Heuristics for semirandom graph problems. Journal of Computer and System Sciences, 63(4):639–671, 2001.
U. Feige and R. Krauthgamer. Finding and certifying a large hidden clique in a semirandom graph. Random Structures and Algorithms, 16(2):195–208, 2000.
A. Ghosh, S. Kale, and P. McAfee. Who moderates the moderators?: crowdsourcing abuse detection in user-generated content. In 12th ACM Conference on Electronic Commerce, pages 167–176, 2011.
O. Guédon and R. Vershynin. Community detection in sparse networks via Grothendieck's inequality. arXiv, 2014.
A. Harmon. Amazon glitch unmasks war of reviewers. New York Times, 2004.
P. W. Holland, K. B. Laskey, and S. Leinhardt. Stochastic blockmodels: Some first steps. Social Networks, 1983.
V. Kamble, N. Shah, D. Marn, A. Parekh, and K. Ramachandran. Truth serums for massively crowdsourced evaluation tasks. arXiv, 2015.
D. R. Karger, S. Oh, and D. Shah. Budget-optimal task allocation for reliable crowdsourcing systems. Operations Research, 62(1):1–24, 2014.
M. Krivelevich and D. Vilenchik. Semirandom models as benchmarks for coloring algorithms. In Meeting on Analytic Algorithmics and Combinatorics, pages 211–221, 2006.
C. Kulkarni, P. W. Koh, H. Huy, D. Chia, K. Papadopoulos, J. Cheng, D. Koller, and S. R. Klemmer. Peer and self assessment in massive online classes. Design Thinking Research, pages 131–168, 2015.
C. M. Le, E. Levina, and R. Vershynin. Concentration and regularization of random graphs. arXiv, 2015.
K. Makarychev, Y. Makarychev, and A. Vijayaraghavan. Approximation algorithms for semi-random partitioning problems. In Symposium on Theory of Computing (STOC), pages 367–384, 2012.
K. Makarychev, Y. Makarychev, and A. Vijayaraghavan. Learning communities in the presence of errors. arXiv, 2015.
L. Massoulié. Community detection thresholds and the weak Ramanujan property. In STOC, 2014.
D. Mayzlin, Y. Dover, and J. A. Chevalier. Promotional reviews: An empirical investigation of online review manipulation. Technical report, National Bureau of Economic Research, 2012.
N. Miller, P. Resnick, and R. Zeckhauser. Eliciting informative feedback: The peer-prediction method. Management Science, 51(9):1359–1373, 2005.
A. Moitra, W. Perry, and A. S. Wein. How robust are reconstruction thresholds for community detection? arXiv, 2015.
E. Mossel, J. Neeman, and A. Sly. Stochastic block models and reconstruction. arXiv, 2012.
E. Mossel, J. Neeman, and A. Sly. Belief propagation, robust reconstruction, and optimal recovery of block models. arXiv, 2013a.
E. Mossel, J. Neeman, and A. Sly. A proof of the block model threshold conjecture. arXiv, 2013b.
E. Mossel, J. Neeman, and A. Sly. Consistency thresholds for the planted bisection model. In STOC, 2015.
C. Piech, J. Huang, Z. Chen, C. Do, A. Ng, and D. Koller. Tuned models of peer assessment in MOOCs. arXiv, 2013.
R. Priedhorsky, J. Chen, S. T. K. Lam, K. Panciera, L. Terveen, and J. Riedl. Creating, destroying, and restoring value in Wikipedia. In International ACM Conference on Supporting Group Work, pages 259–268, 2007.
P. Resnick and R. Sami. The influence limiter: provably manipulation-resistant recommender systems. In ACM Conference on Recommender Systems, pages 25–32, 2007.
N. Shah, D. Zhou, and Y. Peres. Approval voting and incentives in crowdsourcing. In ICML, 2015.
N. B. Shah and D. Zhou. Double or nothing: Multiplicative incentive mechanisms for crowdsourcing. In Advances in Neural Information Processing Systems (NIPS), 2015.
V. Shnayder, R. Frongillo, A. Agarwal, and D. C. Parkes. Strong truthfulness in multi-task peer prediction, 2016.
J. Vuurens, A. P. de Vries, and C. Eickhoff. How much spam can you take? An analysis of crowdsourcing results to increase accuracy. ACM SIGIR Workshop on Crowdsourcing for Information Retrieval, 2011.
Y. Zhang, X. Chen, D. Zhou, and M. I. Jordan. Spectral methods meet EM: A provably optimal algorithm for crowdsourcing. arXiv, 2014.
D. Zhou, Q. Liu, J. C. Platt, C. Meek, and N. B. Shah. Regularized minimax conditional entropy for crowdsourcing. arXiv, 2015.
6,015 | 6,441 | Direct Feedback Alignment Provides Learning in
Deep Neural Networks
Arild Nøkland
Trondheim, Norway
arild.nokland@gmail.com
Abstract
Artificial neural networks are most commonly trained with the back-propagation
algorithm, where the gradient for learning is provided by back-propagating the error,
layer by layer, from the output layer to the hidden layers. A recently discovered
method called feedback-alignment shows that the weights used for propagating the
error backward don't have to be symmetric with the weights used for propagating
the activation forward. In fact, random feedback weights work equally well, because
the network learns how to make the feedback useful. In this work, the feedback
alignment principle is used for training hidden layers more independently from
the rest of the network, and from a zero initial condition. The error is propagated
through fixed random feedback connections directly from the output layer to each
hidden layer. This simple method is able to achieve zero training error even in
convolutional networks and very deep networks, completely without error backpropagation. The method is a step towards biologically plausible machine learning
because the error signal is almost local, and no symmetric or reciprocal weights
are required. Experiments show that the test performance on MNIST and CIFAR
is almost as good as those obtained with back-propagation for fully connected
networks. If combined with dropout, the method achieves 1.45% error on the
permutation invariant MNIST task.
1  Introduction
For supervised learning, the back-propagation algorithm (BP), see [2], has achieved great success in
training deep neural networks. As today, this method has few real competitors due to its simplicity
and proven performance, although some alternatives do exist.
Boltzmann machine learning, in its different variants, is a biologically inspired method for training neural networks; see [6], [10] and [5]. These methods use only locally available signals for adjusting the weights.
These methods can be combined with BP fine-tuning to obtain good discriminative performance.
Contrastive Hebbian Learning (CHL) is similar to Boltzmann Machine learning, but can be used
in deterministic feed-forward networks. In the case of weak symmetric feedback-connections it
resembles BP [16].
Recently, target-propagation (TP) was introduced as an biologically plausible training method, where
each layer is trained to reconstruct the layer below [7]. This method does not require symmetric
weights and propagates target values instead of gradients backward.
A novel training principle called feedback-alignment (FA) was recently introduced [9]. The authors
show that the feedback weights used to back-propagate the gradient do not have to be symmetric with
the feed-forward weights. The network learns how to use fixed random feedback weights in order to
reduce the error. Essentially, the network learns how to learn, and that is a really puzzling result.
30th Conference on Neural Information Processing Systems (NIPS 2016), Barcelona, Spain.
Back-propagation with asymmetric weights was also explored in [8]. One of the conclusions from
this work is that the weight symmetry constraint can be significantly relaxed while still retaining
strong performance.
The back-propagation algorithm is not biologically plausible for several reasons. First, it requires
symmetric weights. Second, it requires separate phases for inference and learning. Third, the learning
signals are not local, but have to be propagated backward, layer-by-layer, from the output units. This
requires that the error derivative has to be transported as a second signal through the network. To
transport this signal, the derivative of the non-linearities have to be known.
All mentioned methods require the error to travel backward through reciprocal connections. This is
biologically plausible in the sense that cortical areas are known to be reciprocally connected [3]. The
question is how an error signal is relayed through an area to reach more distant areas. For BP and FA
the error signal is represented as a second signal in the neurons participating in the forward pass. For
TP the error is represented as a change in the activation in the same neurons. Consider the possibility
that the error in the relay layer is represented by neurons not participating in the forward pass. For
lower layers, this implies that the feedback path becomes disconnected from the forward path, and
the layer is no longer reciprocally connected to the layer above.
The question arises whether a neuron can receive a teaching signal also through disconnected feedback
paths. This work shows experimentally that directly connected feedback paths from the output layer
to neurons earlier in the pathway is sufficient to enable error-driven learning in a deep network. The
requirements are that the feedback is random and the whole network is adapted. The concept is
quite different from back-propagation, but the result is very similar. Both methods seem to produce
features that makes classification easier for the layers above.
Figure 1c) and d) show the novel feedback path configurations that is further explored in this work.
The methods are based on the feedback alignment principle and is named "direct feedback-alignment"
(DFA) and "indirect feedback-alignment" (IFA).
Figure 1: Overview of different error transportation configurations. Grey arrows indicate activation
paths and black arrows indicate error paths. Weights that are adapted during learning are denoted as
Wi , and weights that are fixed and random are denoted as Bi . a) Back-propagation. b) Feedbackalignment. c) Direct feedback-alignment. d) Indirect feedback-alignment.
2  Method
Let (x, y) be mini-batches of input-output vectors that we want the network to learn. For simplicity,
assume that the network has only two hidden layers as in Figure 1, and that the target output y is
scaled between 0 and 1. Let the rows in Wi denote the weights connecting the layer below to a
unit in hidden layer i, and let bi be a column vector with biases for the units in hidden layer i. The
activations in the network are then calculated as
$$a_1 = W_1 x + b_1, \quad h_1 = f(a_1) \qquad (1)$$

$$a_2 = W_2 h_1 + b_2, \quad h_2 = f(a_2) \qquad (2)$$

$$a_y = W_3 h_2 + b_3, \quad \hat{y} = f_y(a_y) \qquad (3)$$
where f () is the non-linearity used in hidden layers and fy () the non-linearity used in the output
layer. If we choose a logistic activation function in the output layer and a binary cross-entropy loss
function, the loss for a mini-batch with size $N$ and the gradient at the output layer $e$ are calculated as

$$J = -\frac{1}{N} \sum_{m,n} y_{mn} \log \hat{y}_{mn} + (1 - y_{mn}) \log(1 - \hat{y}_{mn}) \qquad (4)$$

$$e = \delta a_y = \frac{\partial J}{\partial a_y} = \hat{y} - y \qquad (5)$$

where $m$ and $n$ are output unit and mini-batch indexes. For BP, the gradients for hidden layers are calculated as

$$\delta a_2 = \frac{\partial J}{\partial a_2} = (W_3^T e) \odot f'(a_2), \quad \delta a_1 = \frac{\partial J}{\partial a_1} = (W_2^T \delta a_2) \odot f'(a_1) \qquad (6)$$

where $\odot$ is an element-wise multiplication operator and $f'()$ is the derivative of the non-linearity. This gradient is also called steepest descent, because it directly minimizes the loss function given the linearized version of the network. For FA, the hidden layer update directions are calculated as

$$\delta a_2 = (B_2 e) \odot f'(a_2), \quad \delta a_1 = (B_1 \delta a_2) \odot f'(a_1) \qquad (7)$$

where $B_i$ is a fixed random weight matrix with appropriate dimension. For DFA, the hidden layer update directions are calculated as

$$\delta a_2 = (B_2 e) \odot f'(a_2), \quad \delta a_1 = (B_1 e) \odot f'(a_1) \qquad (8)$$

where $B_i$ is a fixed random weight matrix with appropriate dimension. If all hidden layers have the same number of neurons, $B_i$ can be chosen identical for all hidden layers. For IFA, the hidden layer update directions are calculated as

$$\delta a_2 = (W_2 \delta a_1) \odot f'(a_2), \quad \delta a_1 = (B_1 e) \odot f'(a_1) \qquad (9)$$

where $B_1$ is a fixed random weight matrix with appropriate dimension. Ignoring the learning rate, the weight updates for all methods are calculated as

$$\delta W_1 = -\delta a_1 x^T, \quad \delta W_2 = -\delta a_2 h_1^T, \quad \delta W_3 = -e\, h_2^T \qquad (10)$$
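To make the three update rules concrete, the following is a minimal NumPy sketch of (6)-(9) for the two-hidden-layer network above; it is not the author's code, and the feedback shapes are assumptions spelled out in the comments.

import numpy as np

def f(a):  return np.tanh(a)
def fp(a): return 1.0 - np.tanh(a) ** 2          # derivative of tanh

def updates(x, y, W1, b1, W2, b2, W3, b3, B1, B2, method="DFA"):
    # assumed shapes: B2 is (n2, ny); B1 is (n1, n2) for FA but (n1, ny)
    # for DFA, since DFA feeds the output error e back directly
    a1 = W1 @ x + b1; h1 = f(a1)                 # (1)
    a2 = W2 @ h1 + b2; h2 = f(a2)                # (2)
    ay = W3 @ h2 + b3                            # (3), logistic output
    e = 1.0 / (1.0 + np.exp(-ay)) - y            # (5): e = y_hat - y
    if method == "BP":                           # (6)
        da2 = (W3.T @ e) * fp(a2); da1 = (W2.T @ da2) * fp(a1)
    elif method == "FA":                         # (7)
        da2 = (B2 @ e) * fp(a2);   da1 = (B1 @ da2) * fp(a1)
    else:                                        # DFA, (8)
        da2 = (B2 @ e) * fp(a2);   da1 = (B1 @ e) * fp(a1)
    # (10): weight updates, learning rate omitted
    return -np.outer(da1, x), -np.outer(da2, h1), -np.outer(e, h2)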
3  Theoretical results
BP provides a gradient that points in the direction of steepest descent in the loss function landscape.
FA provides a different update direction, but experimental results indicate that the method is able
to reduce the error to zero in networks with non-linear hidden units. This is surprising because the
principle is distinctly different from steepest descent. For BP, the feedback weights are the transpose of
the forward weights. For FA the feedback weights are fixed, but if the forward weights are adapted,
they will approximately align with the pseudoinverse of the feedback weights in order to make the
feedback useful [9].
The feedback-alignment paper [9] proves that fixed random feedback asymptotically reduces the
error to zero. The conditions for this to happen are freely restated in the following. 1) The network is
linear with one hidden layer. 2) The input data have zero mean and standard deviation one. 3) The
feedback matrix B satisfies B + B = I where B + is the Moore-Penrose pseudo-inverse of B. 4) The
forward weights are initialized to zero. 5) The output layer weights are adapted to minimize the error.
Let?s call this novel principle the feedback alignment principle.
It is not clear how the feedback alignment principle can be applied to a network with several nonlinear hidden layers. The experiments in [9] show that more layers can be added if the error is
back-propagated layer-by-layer from the output.
The following theorem points at a mechanism that can explain the feedback alignment principle.
The mechanism explains how an asymmetric feedback path can provide learning by aligning the
back-propagated and forward propagated gradients with it?s own, under the assumption of constant
update directions for each data point.
Theorem 1. Given 2 hidden layers $k$ and $k+1$ in a feed-forward neural network where $k$ connects to $k+1$. Let $h_k$ and $h_{k+1}$ be the hidden layer activations. Let the functional dependency between the layers be $h_{k+1} = f(a_{k+1})$, where $a_{k+1} = W h_k + b$. Here $W$ is a weight matrix, $b$ is a bias vector and $f()$ is a non-linearity. Let the layers be updated according to the non-zero update directions $\delta h_k$ and $\delta h_{k+1}$, where $\frac{\delta h_k}{\|\delta h_k\|}$ and $\frac{\delta h_{k+1}}{\|\delta h_{k+1}\|}$ are constant for each data point. The negative update directions will minimize the following layer-wise criterion

$$K = K_k + K_{k+1} = \frac{\delta h_k^T h_k}{\|\delta h_k\|} + \frac{\delta h_{k+1}^T h_{k+1}}{\|\delta h_{k+1}\|} \qquad (11)$$

Minimizing $K$ will maximize the gradient maximizing the alignment criterion

$$L = L_k + L_{k+1} = \frac{\delta h_k^T c_k}{\|\delta h_k\|} + \frac{\delta h_{k+1}^T c_{k+1}}{\|\delta h_{k+1}\|} \qquad (12)$$

where

$$c_{k+1} = \frac{\partial h_{k+1}}{\partial h_k^T}\, \delta h_k = (W \delta h_k) \odot f'(a_{k+1}) \qquad (13)$$

$$c_k = \frac{\partial h_{k+1}^T}{\partial h_k}\, \delta h_{k+1} = W^T (\delta h_{k+1} \odot f'(a_{k+1})) \qquad (14)$$

If $L_k > 0$, then $-\delta h_k$ is a descending direction in order to minimize $K_{k+1}$.
Proof. Let $i$ be any of the layers $k$ or $k+1$. The prescribed update $-\delta h_i$ is the steepest descent direction in order to minimize $K_i$ because, by using the product rule and the fact that any partial derivative of $\frac{\delta h_i}{\|\delta h_i\|}$ is zero, we get

$$\frac{\partial K_i}{\partial h_i} = \frac{\partial}{\partial h_i}\!\left(\frac{\delta h_i^T h_i}{\|\delta h_i\|}\right) = 0\, h_i + \frac{\delta h_i}{\|\delta h_i\|} = \lambda_i\, \delta h_i \qquad (15)$$

Here $\lambda_i = \frac{1}{\|\delta h_i\|}$ is a positive scalar because $\delta h_i$ is non-zero. Let $\delta a_i$ be defined as $\delta a_i = \frac{\partial h_i}{\partial a_i}\delta h_i = \delta h_i \odot f'(a_i)$ where $a_i$ is the input to layer $i$. Using the product rule again, the gradients maximizing $L_k$ and $L_{k+1}$ are

$$\frac{\partial L_i}{\partial c_i} = \frac{\partial}{\partial c_i}\!\left(\frac{\delta h_i^T c_i}{\|\delta h_i\|}\right) = 0\, c_i + \frac{\delta h_i}{\|\delta h_i\|} = \lambda_i\, \delta h_i \qquad (16)$$

$$\frac{\partial L_{k+1}}{\partial W} = \frac{\partial L_{k+1}}{\partial c_{k+1}}\frac{\partial c_{k+1}}{\partial W} = \lambda_{k+1} (\delta h_{k+1} \odot f'(a_{k+1}))\, \delta h_k^T = \lambda_{k+1}\, \delta a_{k+1} \delta h_k^T \qquad (17)$$

$$\frac{\partial L_k}{\partial W} = \frac{\partial c_k}{\partial W^T}\frac{\partial L_k}{\partial c_k^T} = (\delta h_{k+1} \odot f'(a_{k+1}))\, \lambda_k \delta h_k^T = \lambda_k\, \delta a_{k+1} \delta h_k^T \qquad (18)$$

Ignoring the magnitude of the gradients we have $\frac{\partial L}{\partial W} \propto \frac{\partial L_k}{\partial W} \propto \frac{\partial L_{k+1}}{\partial W}$. If we project $h_i$ onto $\delta h_i$ we can write $h_i = \frac{h_i^T \delta h_i}{\|\delta h_i\|^2}\,\delta h_i + h_{i,res} = \lambda_i K_i\, \delta h_i + h_{i,res}$. For $W$, the prescribed update is

$$\delta W = -\delta h_{k+1}\frac{\partial h_{k+1}}{\partial W} = -(\delta h_{k+1} \odot f'(a_{k+1}))\, h_k^T = -\delta a_{k+1} h_k^T = -\delta a_{k+1}(\lambda_k K_k\, \delta h_k + h_{k,res})^T = -K_k \frac{\partial L_k}{\partial W} - \delta a_{k+1} h_{k,res}^T \qquad (19)$$

We can indirectly maximize $L_k$ and $L_{k+1}$ by maximizing the component of $\frac{\partial L}{\partial W}$ in $\delta W$ by minimizing $K_k$. The gradient to minimize $K_k$ is the prescribed update $-\delta h_k$.

$L_k > 0$ implies that the angle $\beta$ between $\delta h_k$ and the back-propagated gradient $c_k$ is within $90°$ of each other, because $\cos(\beta) = \frac{c_k^T \delta h_k}{\|c_k\|\|\delta h_k\|} = \frac{L_k}{\|c_k\|} > 0 \Rightarrow |\beta| < 90°$. $L_k > 0$ also implies that $c_k$ is non-zero and thus descending. Then $\delta h_k$ will point in a descending direction because a vector within $90°$ of the steepest descending direction will also point in a descending direction.
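As a small illustration of this final step, one can monitor the cosine numerically during training; the helper below is ours, not from the paper, and simply computes $\cos(\beta)$ between a feedback-driven update $\delta h_k$ and the true back-propagated gradient $c_k$ (positive values correspond to $L_k > 0$, i.e. a descending direction):

import numpy as np

def alignment_cosine(dh_k, c_k):
    # cos(beta) = c_k^T dh_k / (||c_k|| ||dh_k||); small constant avoids 0/0
    return float(dh_k @ c_k / (np.linalg.norm(dh_k) * np.linalg.norm(c_k) + 1e-12))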
It is important to note that the theorem doesn't tell us that the training will converge or reduce any error to zero, but if the fake gradient is successful in reducing $K$, then this gradient will also include a growing component that tries to increase the alignment criterion $L$.

The theorem can be applied to the output layer and the last hidden layer in a neural network. To achieve error-driven learning, we have to close the feedback loop. Then we get the update directions $\delta h_{k+1} = \frac{\partial J}{\partial a_y} = e$ and $\delta h_k = G_k(e)$, where $G_k(e)$ is a feedback path connecting the output to the hidden layer. The prescribed update will directly minimize the loss $J$ given $h_k$. If $L_k$ turns positive, the feedback will provide an update direction $\delta h_k = G_k(e)$ that reduces the same loss. The theorem can be applied successively to deeper layers. For each layer $i$, the weight matrix $W_i$ is updated to minimize $K_{i+1}$ in the layer above, and at the same time indirectly make its own update direction $\delta h_i = G_i(e)$ useful.
Theorem 1 suggests that a large class of asymmetric feedback paths can provide a descending gradient direction for a hidden layer, as long as on average $L_i > 0$. Choosing feedback paths $G_i(e)$ visiting every layer on their way backward, with weights fixed and random, gives us the FA method. Choosing direct feedback paths $G_i(e) = B_i e$, with $B_i$ fixed and random, gives us the DFA method. Choosing a direct feedback path $G_1(e) = B_1 e$ connecting to the first hidden layer, and then visiting every layer on the way forward, gives us the IFA method. The experimental section shows that learning is possible even with indirect feedback like this.
Direct random feedback $\delta h_i = G_i(e) = B_i e$ has the advantage that $\delta h_i$ is non-zero for all non-zero $e$. This is because a random matrix $B_i$ will have full rank with a probability very close to 1. A non-zero $\delta h_i$ is a requirement in order to achieve $L_i > 0$. Keeping the feedback static will ensure that this property is preserved during training. In addition, a static feedback can make it easier to maximize $L_i$ because the direction of $\delta h_i$ is more constant. If the cross-entropy loss is used, and the output target values are 0 or 1, then the sign of the error $e_j$ for a given sample $j$ will not change. This means that the quantity $B_i\,\mathrm{sign}(e_j)$ will be constant during training because both $B_i$ and $\mathrm{sign}(e_j)$ are constant. If the task is to classify, the quantity will in addition be constant for all samples within a class. Direct random feedback will also provide an update direction $\delta h_i$ with a magnitude that only varies with the magnitude of the error $e$.

If the forward weights are initialized to zero, then $L_i = 0$ because the back-propagated error is zero. This seems like a good starting point when using asymmetric feedback because the first update steps have the possibility to quickly turn this quantity positive. A zero initial condition is however not a requirement for asymmetric feedback to work. One of the experiments will show that even when starting from a bad initial condition, direct random and static feedback is able to turn this quantity positive and reduce the training error to zero.
For FA and BP, the hidden layer growth is bounded by the layers above. If the layers above saturate, the hidden layer update $\delta h_i$ becomes zero. For DFA, the hidden layer update $\delta h_i$ will be non-zero as long as the error $e$ is non-zero. To limit the growth, a squashing non-linearity like the hyperbolic tangent or the logistic sigmoid seems appropriate. If we add a tanh non-linearity to the hidden layer, the hidden activation is bounded within $[-1, 1]$. With zero initial weights, $h_i$ will be zero for all data points. The tanh non-linearity will not limit the initial growth in any direction. The experimental results indicate that this non-linearity is well suited to DFA.

If the hyperbolic tangent non-linearity is used in the hidden layer, the forward weights can be initialized to zero. The rectified linear activation function (ReLU) will not work with zero initial weights because the error derivative for such a unit is zero when the bias and incoming weights are all zero.
4  Experimental results
To investigate if DFA learns useful features in the hidden layers, a 3x400 tanh network was trained
on MNIST with both BP and DFA. The input test images and resulting features were visualized using
t-SNE [15], see Figure 3. Both methods learn features that make it easier to discriminate between
the classes. At the third hidden layer, the clusters are well separated, except for some stray points.
The visible improvement in separation from the input to the first hidden layer indicates that DFA is able to learn useful features also in deeper hidden layers.
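For reference, the visualization step can be sketched with scikit-learn's t-SNE [15] roughly as follows (a hypothetical sketch; the paper does not specify its exact settings):

from sklearn.manifold import TSNE

def embed_2d(features):
    # features: (num_samples, num_units) array of hidden activations for one
    # layer; returns a 2-D embedding, colored by class label when plotted
    return TSNE(n_components=2).fit_transform(features)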
Figure 2: Left: Error curves for a network pre-trained with a frozen first hidden layer. Right: Error
curves for normal training of a 2x800 tanh network on MNIST.
Figure 3: t-SNE visualization of MNIST input and features. Different colors correspond to different
classes. The top row shows features obtained with BP, the bottom row shows features obtained with
DFA. From left to right: input images, first hidden layer features, second hidden layer features and
third hidden layer features.
Furthermore, another experiment was performed to see if DFA is able to learn useful hidden representations in deeper layers. A 3x50 tanh network was trained on MNIST. The first hidden layer was fixed to random weights, but the 2 hidden layers above were trained with BP for 50 epochs. At this point, the training error was about 5%. Then, the first hidden layer was unfrozen and training continued with BP. The training error decreased to 0% in about 50 epochs. The last step was repeated, but this time the unfrozen layer was trained with DFA. As expected because of different update directions, the error first increased, then decreased to 0% after about 50 epochs. The error curves are presented in Figure 2 (Left). Even though the update direction provided by DFA is different from the back-propagated gradient, the resulting hidden representation reduces the error in a similar way.
of DFA with FA and BP. The experiments were performed with the binary cross-entropy loss and
optimized with RMSprop [14]. For the MNIST dropout experiments, learning rate with decay and
training time was chosen based on a validation set. For all other experiments, the learning rate was
roughly optimized for BP and then used for all methods. The learning rate was constant for each
dataset. Training was stopped when training error reached 0.01% or the number of epochs reached
300. A mini-batch size of 64 was used. No momentum or weight decay was used. The input data
was scaled to be between 0 and 1, but for the convolutional networks, the data was whitened. For
FA and DFA, the weights and biases were initialized to zero, except for the ReLU networks. For BP
and/or ReLU, the initial weights and biases were sampled from a uniform distribution in the range $[-1/\sqrt{fanin},\, 1/\sqrt{fanin}]$. The random feedback weights were sampled from a uniform distribution in the range $[-1/\sqrt{fanout},\, 1/\sqrt{fanout}]$.
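A sketch of this initialization scheme follows (our reading of the conventions: fan-in and fan-out are assumed to denote the input and output widths of each weight matrix):

import numpy as np

def init_forward(fan_out, fan_in, zero_init):
    if zero_init:                                # FA/DFA with tanh units
        return np.zeros((fan_out, fan_in)), np.zeros(fan_out)
    lim = 1.0 / np.sqrt(fan_in)                  # BP and/or ReLU networks
    W = np.random.uniform(-lim, lim, (fan_out, fan_in))
    b = np.random.uniform(-lim, lim, fan_out)
    return W, b

def init_feedback(fan_out, fan_in):
    lim = 1.0 / np.sqrt(fan_out)                 # 1/sqrt(fanout) range for feedback
    return np.random.uniform(-lim, lim, (fan_out, fan_in))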
MODEL              | BP                   | FA                   | DFA
7x240 Tanh         | 2.16 ± 0.13%         | 2.20 ± 0.13% (0.02%) | 2.32 ± 0.15% (0.03%)
100x240 Tanh       |                      |                      | 3.92 ± 0.09% (0.12%)
1x800 Tanh         | 1.59 ± 0.04%         | 1.68 ± 0.05%         | 1.68 ± 0.05%
2x800 Tanh         | 1.60 ± 0.06%         | 1.64 ± 0.03%         | 1.74 ± 0.08%
3x800 Tanh         | 1.75 ± 0.05%         | 1.66 ± 0.09%         | 1.70 ± 0.04%
4x800 Tanh         | 1.92 ± 0.11%         | 1.70 ± 0.04%         | 1.83 ± 0.07% (0.02%)
2x800 Logistic     | 1.67 ± 0.03%         | 1.82 ± 0.10%         | 1.75 ± 0.04%
2x800 ReLU         | 1.48 ± 0.06%         | 1.74 ± 0.10%         | 1.70 ± 0.06%
2x800 Tanh + DO    | 1.26 ± 0.03% (0.18%) | 1.53 ± 0.03% (0.18%) | 1.45 ± 0.07% (0.24%)
2x800 Tanh + ADV   | 1.01 ± 0.08%         | 1.14 ± 0.03%         | 1.02 ± 0.05% (0.12%)

Table 1: MNIST test error for back-propagation (BP), feedback-alignment (FA) and direct feedback-alignment (DFA). Training error in brackets when higher than 0.01%. Empty fields indicate no convergence.
The results on MNIST are summarized in Table 1. For adversarial regularization (ADV), the
networks were trained on adversarial examples generated by the "fast-sign-method" [4]. For dropout
regularization (DO) [12], a dropout probability of 0.1 was used in the input layer and 0.5 elsewhere.
For the 7x240 network, target propagation achieved an error of 1.94% [7]. The results for all
three methods are very similar. Only DFA was able to train the deepest network with the simple
initialization used. The best result for DFA matches the best result for BP.
MODEL             | BP                   | FA                   | DFA
1x1000 Tanh       | 45.1 ± 0.7% (2.5%)   | 46.4 ± 0.4% (3.2%)   | 46.4 ± 0.4% (3.2%)
3x1000 Tanh       | 45.1 ± 0.3% (0.2%)   | 47.0 ± 2.2% (0.3%)   | 47.4 ± 0.8% (2.3%)
3x1000 Tanh + DO  | 42.2 ± 0.2% (36.7%)  | 46.9 ± 0.3% (48.9%)  | 42.9 ± 0.2% (37.6%)
CONV Tanh         | 22.5 ± 0.4%          | 27.1 ± 0.8% (0.9%)   | 26.9 ± 0.5% (0.2%)

Table 2: CIFAR-10 test error for back-propagation (BP), feedback-alignment (FA) and direct feedback-alignment (DFA). Training error in brackets when higher than 0.1%.
The results on CIFAR-10 are summarized in Table 2. For the convolutional network the error was
injected after the max-pooling layers. The model was identical to the one used in the dropout paper
[12], except for the non-linearity. For the 3x1000 network, target propagation achieved an error of
49.29% [7]. For the dropout experiment, the gap between BP and DFA is only 0.7%. FA does not
seem to improve with dropout. For the convolutional network, DFA and FA are worse than BP.
MODEL             | BP                   | FA                   | DFA
1x1000 Tanh       | 71.7 ± 0.2% (38.7%)  | 73.8 ± 0.3% (37.5%)  | 73.8 ± 0.3% (37.5%)
3x1000 Tanh       | 72.0 ± 0.3% (0.2%)   | 75.3 ± 0.1% (0.5%)   | 75.9 ± 0.2% (3.1%)
3x1000 Tanh + DO  | 69.8 ± 0.1% (66.8%)  | 75.3 ± 0.2% (77.2%)  | 73.1 ± 0.1% (69.8%)
CONV Tanh         | 51.7 ± 0.2%          | 60.5 ± 0.3%          | 59.0 ± 0.3%

Table 3: CIFAR-100 test error for back-propagation (BP), feedback-alignment (FA) and direct feedback-alignment (DFA). Training error in brackets when higher than 0.1%.
The results on CIFAR-100 are summarized in Table 3. DFA improves with dropout, while FA does
not. For the convolutional network, DFA and FA are worse than BP.
The above experiments were performed to verify the DFA method. The feedback loops are the
shortest possible, but other loops can also provide learning. An experiment was performed on MNIST
to see if a single feedback loop like in Figure 1d), was able to train a deep network with 4 hidden
layers of 100 neurons each. The feedback was connected to the first hidden layer, and all hidden
layers above were trained with the update direction forward-propagated through this loop. Starting
from a random initialization, the training error reduced to 0%, and the test error reduced to 3.9%.
5  Discussion
The experimental results indicate that DFA is able to fit the training data equally well as BP and FA.
The performance on the test set is similar to FA but lagging a little behind BP. For the convolutional
network, BP is clearly the best performer. Adding regularization seems to help more for DFA than
for FA.
Only DFA was successful in training a network with 100 hidden layers. If proper weight initialization
is used, BP is able to train very deep networks as well [13][11]. The reason why BP fails to converge
is probably the very simple initialization scheme used here. Proper initialization might help FA in a
similar way, but this was not investigated any further.
The DFA training procedure has a lot in common with supervised layer-wise pre-training of a deep
network, but with an important difference. If all layers are trained simultaneously, it is the error at the
top of a deep network that drives the learning, not the error in a shallow pre-training network.
If the network above a target hidden layer is not adapted, FA and DFA will not give an improvement
in the loss. This is in contrast to BP that is able to decrease the error even in this case because the
feedback depends on the weights and layers above.
DFA demonstrates a novel application of the feedback alignment principle. The brain may or may not
implement this kind of feedback, but it is a step towards better understanding the mechanisms that
can provide error-driven learning in the brain. DFA shows that learning is possible in feedback loops
where the forward and feedback paths are disconnected. This introduces a large flexibility in how the
error signal might be transmitted. A neuron might receive its error signals via a post-synaptic neuron (BP, CHL), via a reciprocally connected neuron (FA, TP), directly from a pre-synaptic neuron (DFA), or indirectly from an error source located several synapses away earlier in the informational pathway (IFA).
Disconnected feedback paths can lead to more biologically plausible machine learning. If the feedback
signal is added to the hidden layers before the non-linearity, the derivative of the non-linearity does
not have to be known. The learning rule becomes local because the weight update only depends on
the pre-synaptic activity and the temporal derivative of the post-synaptic activity. Learning is not a
separate phase, but performed at the end of an extended forward pass. The error signal is not a second
signal in the neurons participating in the forward pass, but a separate signal relayed by other neurons.
The local update rule can be linked to Spike-Timing-Dependent Plasticity (STDP) believed to govern
synaptic weight updates in the brain, see [1].
Disconnected feedback paths have great similarities with controllers used in dynamical control loops.
The purpose of the feedback is to provide a change in the state that reduces the output error. For a
dynamical control loop, the change is added to the state and propagated forward to the output. For a
neural network, the change is used to update the weights.
6  Conclusion
A biologically plausible training method based on the feedback alignment principle is presented for
training neural networks with error feedback rather than error back-propagation. In this method,
neither symmetric weights nor reciprocal connections are required. The error paths are short and
enable training of very deep networks. The training signals are local or available at most one synapse
away. No weight initialization is required.
The method was able to fit the training set on all experiments performed on MNIST, Cifar-10 and
Cifar-100. The performance on the test sets lags a little behind back-propagation.
Most importantly, this work suggests that the restriction enforced by back-propagation and feedback-alignment, that the backward pass has to visit every neuron from the forward pass, can be discarded.
Learning is possible even when the feedback path is disconnected from the forward path.
References
[1] Yoshua Bengio, Dong-Hyun Lee, Jörg Bornschein, Thomas Mesnard, and Zhouhan Lin. Towards biologically plausible deep learning. CoRR, abs/1502.04156, 2015.
[2] D. E. Rumelhart, G. E. Hinton, and R. J. Williams. Learning internal representations by error propagation. Nature, 323:533–536, 1986.
[3] Charles D. Gilbert and Wu Li. Top-down influences on visual processing. Nature Reviews Neuroscience, 14(5):350–363, 2013.
[4] Ian J. Goodfellow, Jonathon Shlens, and Christian Szegedy. Explaining and harnessing adversarial examples. CoRR, abs/1412.6572, 2014.
[5] Geoffrey E. Hinton, Simon Osindero, and Yee Whye Teh. A fast learning algorithm for deep belief nets. Neural Computation, 18(7):1527–1554, 2006.
[6] Geoffrey E. Hinton and Terrence J. Sejnowski. Optimal perceptual inference. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 1983.
[7] Dong-Hyun Lee, Saizheng Zhang, Asja Fischer, and Yoshua Bengio. Difference target propagation. In ECML/PKDD (1), Machine Learning and Knowledge Discovery in Databases, pages 498–515. Springer International Publishing, 2015.
[8] Qianli Liao, Joel Z. Leibo, and Tomaso A. Poggio. How important is weight symmetry in backpropagation? CoRR, abs/1510.05067, 2015.
[9] Timothy P. Lillicrap, Daniel Cownden, Douglas B. Tweed, and Colin J. Akerman. Random feedback weights support learning in deep neural networks. CoRR, abs/1411.0247, 2014.
[10] Ruslan Salakhutdinov and Geoffrey E. Hinton. Deep Boltzmann machines. In Proceedings of the Twelfth International Conference on Artificial Intelligence and Statistics, AISTATS 2009, volume 5 of JMLR Proceedings, pages 448–455. JMLR.org, 2009.
[11] Andrew M. Saxe, James L. McClelland, and Surya Ganguli. Exact solutions to the nonlinear dynamics of learning in deep linear neural networks. CoRR, abs/1312.6120, 2013.
[12] Nitish Srivastava, Geoffrey E. Hinton, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov. Dropout: a simple way to prevent neural networks from overfitting. Journal of Machine Learning Research, 15(1):1929–1958, 2014.
[13] David Sussillo. Random walks: Training very deep nonlinear feed-forward networks with smart initialization. CoRR, abs/1412.6558, 2014.
[14] T. Tieleman and G. Hinton. Lecture 6.5-rmsprop: Divide the gradient by a running average of its recent magnitude. COURSERA: Neural Networks for Machine Learning 4, 2012.
[15] L. J. P. van der Maaten and G. E. Hinton. Visualizing high-dimensional data using t-SNE. Journal of Machine Learning Research, 9:2579–2605, 2008.
[16] Xiaohui Xie and H. Sebastian Seung. Equivalence of backpropagation and contrastive Hebbian learning in a layered network. Neural Computation, 15(2):441–454, 2003.
6,016 | 6,442 | Computational and Statistical Tradeoffs in Learning to Rank
Ashish Khetan and Sewoong Oh
Department of ISE, University of Illinois at Urbana-Champaign
Email: {khetan2,swoh}@illinois.edu
Abstract
For massive and heterogeneous modern data sets, it is of fundamental interest to
provide guarantees on the accuracy of estimation when computational resources
are limited. In the application of learning to rank, we provide a hierarchy of rank-breaking mechanisms ordered by the complexity of the thus generated sketch of the
data. This allows the number of data points collected to be gracefully traded off
against computational resources available, while guaranteeing the desired level
of accuracy. Theoretical guarantees on the proposed generalized rank-breaking
implicitly provide such trade-offs, which can be explicitly characterized under
certain canonical scenarios on the structure of the data.
1 Introduction
In classical statistical inference, we are typically interested in characterizing how more data points
improve the accuracy, with little restrictions or considerations on computational aspects of solving
the inference problem. However, with massive growths of the amount of data available and also
the complexity and heterogeneity of the collected data, computational resources, such as time and
memory, are major bottlenecks in many modern applications. As a solution, recent advances in
[7, 23, 8, 1, 16] introduce hierarchies of algorithmic solutions, ordered by the respective computational
complexity, for several fundamental machine learning applications. Guided by sharp analyses on the
sample complexity, these approaches provide theoretically sound guidelines that allow the analyst the
flexibility to fall back to simpler algorithms to enjoy the full merit of the improved run-time.
Inspired by these advances, we study the time-data tradeoff in learning to rank. In many applications
such as election, policy making, polling, and recommendation systems, we want to aggregate individual preferences to produce a global ranking that best represents the collective social preference.
Learning to rank is a rank aggregation approach, which assumes that the data comes from a parametric
family of choice models, and learns the parameters that determine the global ranking. Traditionally,
each revealed preference is assumed to have one of the following three structures. Pairwise comparison, where one item is preferred over another, is common in sports and chess matches. Best-out-of-κ
comparison, where one is chosen among a set of κ alternatives, is common in historical purchase
data. κ-way comparison, where we observe a linear ordering of a set of κ candidates, is used in some
elections and surveys. For such traditional preferences, efficient schemes for learning to rank have
been proposed, e.g. [12, 9]. However, modern data sets are unstructured and heterogeneous. This can
lead to a significant increase in the computational complexity, requiring exponential run-time in the
size of the problem in the worst case [15].
To alleviate this computational challenge, we propose a hierarchy of estimators which we call
generalized rank-breaking, ordered in increasing computational complexity and achieving increasing
accuracy. The key idea is to break down the heterogeneous revealed preferences into simpler pieces
of ordinal relations, and apply an estimator tailored for those simple structures treating each piece as
independent. Several aspects of rank-breaking make this problem interesting and challenging. A
priori, it is not clear which choices of the simple ordinal relations are rich enough to be statistically
efficient and yet lead to tractable estimators. Even if we identify which ordinal relations to extract,
the ignored correlations among those pieces can lead to an inconsistent estimate, unless we choose
carefully which pieces to include and which to omit in the estimation. We further want sharp analysis
on the sample complexity, which reveals how computational and statistical efficiencies trade off. We
would like to address all these challenges in providing generalized rank-breaking methods.
Problem formulation. We study the problem of aggregating ordinal data based on users' preferences
that are expressed in the form of partially ordered sets (posets). A poset is a collection of ordinal
relations among items. For example, consider a poset {(i6 ≺ {i5, i4}), (i5 ≺ i3), ({i3, i4} ≺
{i1, i2})} over items {i1, . . . , i6}, where (i6 ≺ {i5, i4}) indicates that items i5 and i4 are both
preferred over item i6. Such a relation is extracted from, for example, the user giving a 2-star rating
to i5 and i4 and a 1-star to i6. Assuming that the revealed preference is consistent, a poset can be
represented as a directed acyclic graph (DAG) Gj as below.
[Figure 1 shows the DAG Gj over items i1, . . . , i6 together with the two rank-breaking hyper edges e1 and e2 extracted from it.]
Figure 1: An example of Gj for user j's consistent poset, and two rank-breaking hyper edges extracted
from it: e1 = ({i6, i5, i4, i3} ≺ {i2, i1}) and e2 = ({i6} ≺ {i5, i4, i3}).
We assume that each user j is presented with a subset of items Sj , and independently provides her
ordinal preference in the form of a poset, where the ordering is drawn from the Plackett-Luce (PL)
model. The PL model is a popular choice model from operations research and psychology, used to
model how people make choices under uncertainty. It is a special case of random utility models, where
each item i is parametrized by a latent true utility θi ∈ R. When offered Sj, the user samples
the perceived utility Ui for each item independently according to Ui = θi + Zi, where the Zi's are i.i.d.
noise. In particular, the PL model assumes the Zi's follow the standard Gumbel distribution. Although
the statistical and computational tradeoff has been studied under Mallows models [6] or stochastically
transitive models [22], the techniques we develop are different and have a potential to generalize to
analyze a more general class of random utility models. The observed poset is a partial observation of
the ordering according to these perceived utilities.
The particular choice of the Gumbel distribution has several merits, largely stemming from the fact
that the Gumbel distribution has a log-concave pdf and is inherently memoryless. In our analyses, we
use the log-concavity to show that our proposed algorithm is a concave maximization (Remark 2.1)
and the memoryless property forms the basis of our rank-breaking idea. Precisely, the PL model is
statistically equivalent to the following procedure. Consider a ranking as a mapping from a rank to an
item, i.e. σj : [|Sj|] → Sj. It can be shown that the PL model is generated by first independently
assigning each item i ∈ Sj an unobserved value Yi, exponentially distributed with mean e^{−θi}, and
the resulting ranking σj is inversely ordered in the Yi's so that Y_{σj(1)} ≤ Y_{σj(2)} ≤ · · · ≤ Y_{σj(|Sj|)}.
This inherits the memoryless property of exponential variables, such that P(Y1 < Y2 < Y3 ) =
P(Y1 < {Y2 , Y3 })P(Y2 < Y3 ), leading to a simple interpretation of the PL model as sequential
choices: P(i3 ≺ i2 ≺ i1) = P({i3, i2} ≺ i1) P(i3 ≺ i2) = (e^{θ_{i1}}/(e^{θ_{i1}} + e^{θ_{i2}} + e^{θ_{i3}})) × (e^{θ_{i2}}/(e^{θ_{i2}} + e^{θ_{i3}})). In general, we have
$$P[\sigma_j] \;=\; \prod_{i=1}^{|S_j|-1} \frac{e^{\theta_{\sigma_j(i)}}}{\sum_{i'=i}^{|S_j|} e^{\theta_{\sigma_j(i')}}}\,.$$
We assume that the true utility θ* ∈ Ω_b, where Ω_b = {θ ∈ R^d | ∑_{i∈[d]} θi = 0, |θi| ≤ b for all i ∈ [d]}. Notice that the centering of θ
ensures its uniqueness, as the PL model is invariant under shifting of θ. The bound b on θi is written
explicitly to capture the dependence in our main results.
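This exponential-race construction translates directly into a sampler. The following is a minimal Python sketch of ours; the function name and the use of NumPy are illustrative assumptions, not part of the paper.

import numpy as np

def sample_pl_ranking(theta, rng=None):
    # Each item i receives an independent Y_i ~ Exponential with mean e^{-theta_i};
    # sorting by increasing Y_i yields a PL ranking, most preferred item first.
    rng = rng or np.random.default_rng()
    y = rng.exponential(scale=np.exp(-np.asarray(theta, dtype=float)))
    return np.argsort(y)  # ranking[0] is the most preferred item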
We denote a set of n users by [n] = {1, . . . , n} and the set of d items by [d]. Let Gj denote the DAG
representation of the poset provided by the user j over Sj ? [d] according to the PL model with
weights ?? . The maximum likelihood estimate (MLE) maximizes the sum of all possible rankings
that are consistent with the observed Gj for each j:
$$\hat{\theta} \;\in\; \arg\max_{\theta \in \Omega_b}\; \sum_{j=1}^{n} \log\Big(\sum_{\sigma \in G_j} P_\theta[\sigma]\Big)\,, \qquad (1)$$
where we slightly abuse the notation Gj to denote the set of all rankings σ that are consistent with
the observation. When Gj has a traditional structure as explained earlier in this section, the
optimization is a simple multinomial logit regression that can be solved efficiently with off-the-shelf
convex optimization tools [12]. For general posets, it can be shown that the above optimization is
a concave maximization, using similar techniques as Remark 2.1. However, the summation over
rankings in Gj can involve a number of terms super-exponential in the size |Sj| in the worst case. This
renders the MLE intractable and impractical.
Pairwise rank-breaking. A common remedy to this computational blow-up is to use rank-breaking.
Rank-breaking traditionally refers to pairwise rank-breaking, where a bag of all the pairwise comparisons is extracted from observations {Gj}_{j∈[n]} and is applied to estimators that are tailored for
pairwise comparisons, treating each paired outcome as independent. This is one of the motivations
behind the algorithmic advances in learning from pairwise comparisons [19, 21, 17].
It is computationally efficient to apply the maximum likelihood estimator assuming independent pairwise
comparisons, which takes O(d²) operations to evaluate. However, this computational gain comes at
the cost of statistical efficiency. It is known from [4] that if we include all paired comparisons, then
the resulting estimate can be statistically inconsistent due to the ignored correlations among the paired
orderings, even with infinite samples. In the example from Figure 1, there are 12 paired relations:
(i6 ≺ i5), (i6 ≺ i4), (i6 ≺ i3), . . . , (i3 ≺ i1), (i4 ≺ i1). In order to get a consistent estimate, [4]
provides a rule for choosing which pairs to include, and [15] provides an estimator that optimizes
how to weigh each of those chosen pairs to get the best finite sample complexity bound. However,
such a consistent pairwise rank-breaking results in throwing away many of the ordered relations,
resulting in significant loss in accuracy. For example, none of the pairwise orderings can be used
from Gj in the example, without making the estimator inconsistent [3]. Whether we include all paired
comparisons or only a subset of consistent ones, there is a significant loss in accuracy as illustrated in
Figure 2. For the precise condition for consistent rank-breaking we refer to [3, 4, 15].
The state-of-the-art approaches operate on either one of the two extreme points on the computational
and statistical trade-off. The MLE in (1) requires O(∑_{j∈[n]} |Sj|!) summations to just evaluate the
objective function, in the worst case. On the other hand, the pairwise rank-breaking requires only
O(d2 ) summations, but suffers from significant loss in the sample complexity. Ideally, we would
like to give the analyst the flexibility to choose a target computational complexity she is willing to
tolerate, and provide an algorithm that achieves the optimal trade-off at any operating point.
Contribution. We introduce a novel generalized rank-breaking that bridges the gap between MLE
and pairwise rank-breaking. Our approach allows the user the freedom to choose the level of
computational resources to be used, and provides an estimator tailored for the desired complexity.
We prove that the proposed estimator is tractable and consistent, and provide an upper bound on the
error rate in the finite sample regime. The analysis explicitly characterizes the dependence on the
topology of the data. This in turn provides a guideline for designing surveys and experiments in
practice, in order to maximize the sample efficiency. We provide numerical experiments confirming
the theoretical guarantees.
2 Generalized rank-breaking
Given Gj's representing the users' preferences, generalized rank-breaking extracts a set of ordered
relations and applies an estimator treating each ordered relation as independent. Concretely, for
each Gj, we first extract a maximal ordered partition Pj of Sj that is consistent with Gj. An ordered
partition is a partition with a linear ordering among the subsets, e.g. Pj = ({i6} ≺ {i5, i4, i3} ≺
{i2, i1}) for Gj from Figure 1. This is maximal, since we cannot further partition any of the subsets
without creating artificial ordered relations that are not present in the original Gj.
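The breaking step itself is mechanical; a minimal sketch of ours (with hypothetical names) that turns a maximal ordered partition, listed from least to most preferred, into the hyper edges (B(e), T(e)) defined next:

def rank_breaking_edges(ordered_partition):
    # ordered_partition lists subsets from least to most preferred, e.g.
    # [{6}, {5, 4, 3}, {2, 1}] encodes Pj = ({i6} < {i5,i4,i3} < {i2,i1}).
    # Every subset except the least preferred yields one edge: its top-set is
    # the subset itself, its bottom-set is everything less preferred than it.
    edges = []
    for a in range(1, len(ordered_partition)):
        top = set(ordered_partition[a])
        bottom = set().union(*ordered_partition[:a])
        edges.append((bottom, top))
    return edges

# Figure 1 example: yields e2 = ({6}, {5, 4, 3}) and e1 = ({6, 5, 4, 3}, {2, 1}).
print(rank_breaking_edges([{6}, {5, 4, 3}, {2, 1}]))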
The extracted ordered partition is represented by a directed hypergraph Gj(Sj, Ej), which we
call a rank-breaking graph. Each edge e = (B(e), T(e)) ∈ Ej is a directed hyper edge from a
subset of nodes B(e) ⊆ Sj to another subset T(e) ⊆ Sj. The number of edges in Ej is |Pj| − 1,
where |Pj| is the number of subsets in the partition. For each subset in Pj except for the least
preferred subset, there is a corresponding edge whose top-set T(e) is the subset, and the bottom-set
B(e) is the set of all items less preferred than T(e). In Figure 1, for Ej = {e1, e2} we show
e1 = (B(e1), T(e1)) = ({i6, i5, i4, i3}, {i2, i1}) and e2 = (B(e2), T(e2)) = ({i6}, {i5, i4, i3})
extracted from Gj. Denote the probability that T(e) is preferred over B(e) when T(e) ∪ B(e) is
offered as
$$P_\theta(e) \;=\; P_\theta\big(B(e) \prec T(e)\big) \;=\; \sum_{\sigma \in \Sigma_{T(e)}} \frac{\exp\big(\sum_{c=1}^{|T(e)|} \theta_{\sigma(c)}\big)}{\prod_{u=1}^{|T(e)|} \Big(\sum_{c'=u}^{|T(e)|} \exp\big(\theta_{\sigma(c')}\big) + \sum_{i \in B(e)} \exp(\theta_i)\Big)}\,, \qquad (2)$$
which follows from the definition of the PL model, where Σ_{T(e)} is the set of all rankings over
T(e). The computational complexity of evaluating this probability is dominated by the size of the
top-set |T(e)|, as it involves (|T(e)|!) summations. We let the analyst choose the order M ∈ Z₊
depending on how much computational resource is available, and only include those edges with
|T(e)| ≤ M in the following step. We apply the MLE for comparisons over paired subsets, assuming
all rank-breaking graphs are independently drawn. Precisely, we propose order-M rank-breaking
estimate, which is the solution that maximizes the log-likelihood under the independent assumption:
$$\hat{\theta} \;\in\; \arg\max_{\theta \in \Omega_b} \mathcal{L}_{\rm RB}(\theta)\,, \quad\text{where}\quad \mathcal{L}_{\rm RB}(\theta) \;=\; \sum_{j \in [n]} \;\sum_{e \in E_j : |T(e)| \le M} \log P_\theta(e)\,. \qquad (3)$$
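The inner term log P_θ(e) of (3) can be evaluated by direct enumeration of the |T(e)|! orderings in (2); the cost of that enumeration is precisely what the restriction |T(e)| ≤ M controls. A minimal sketch, with names of our choosing:

import itertools
import numpy as np

def log_edge_prob(theta, top, bottom):
    # Direct evaluation of Eq. (2): enumerate all orderings of the top-set;
    # the cost is O(|T(e)|! * |T(e)|), so this is sensible only for small top-sets.
    theta = np.asarray(theta, dtype=float)
    w = np.exp(theta)
    bottom_mass = w[list(bottom)].sum()
    total = 0.0
    for sigma in itertools.permutations(list(top)):
        num = np.exp(theta[list(sigma)].sum())
        den = 1.0
        for u in range(len(sigma)):
            den *= w[list(sigma[u:])].sum() + bottom_mass
        total += num / den
    return np.log(total)

L_RB(θ) is then the sum of this quantity over all retained edges of all users.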
In a special case when M = 1, this can be transformed into the traditional pairwise rank-breaking,
where (i) this is a concave maximization; (ii) the estimate is (asymptotically) unbiased and consistent
[3, 4]; and (iii) the finite sample complexity has been analyzed [15]. Although this order-1
rank-breaking provides a significant gain in computational efficiency, the information contained in
higher-order edges is unused, resulting in a significant loss in sample efficiency.
We provide the analyst the freedom to choose the computational complexity he/she is willing to
tolerate. However, for general M , it has not been known if the optimization in (3) is tractable
and/or if the solution is consistent. Since P_θ(B(e) ≺ T(e)) as explicitly written in (2) is a sum of
log-concave functions, it is not clear if the sum is also log-concave. Due to the ignored dependency
in the formulation (3), it is not clear if the resulting estimate is consistent. We first establish that
it is a concave maximization in Remark 2.1, then prove consistency in Remark 2.2, and provide a
sharp analysis of the performance in the finite sample regime, characterizing the trade-off between
computation and sample size in Section 4. We use the Random Utility Model (RUM) interpretation of
the PL model to prove concavity. We refer to Appendix A in the supplementary material for a proof.
Remark 2.1. L_RB(θ) is concave in θ ∈ R^d.
For consistency, we consider a simple but canonical scenario for sampling ordered relations. However,
we study a general sampling scenario, when we analyze the order-M estimator in the finite sample
regime in Section 4. Following is the canonical sampling scenario. There is a set of ℓ̃ integers
(m̃₁, . . . , m̃_ℓ̃) whose sum is strictly less than d. A new arriving user is presented with all d items
and is asked to provide her top m̃₁ items as an unordered set, and then the next m̃₂ items, and so on.
This is sampling from the PL model and observing an ordered partition with (ℓ̃ + 1) subsets of sizes
m̃_a's, and the last subset includes all remaining items. We apply the generalized rank-breaking to get
rank-breaking graphs {Gj} with ℓ̃ edges each, and the order-M estimate is computed. We show that this
is consistent, i.e. asymptotically unbiased in the limit of the number of users n. A proof is provided
in the supplementary material.
Remark 2.2. Under the PL model and the above sampling scenario, the order-M rank-breaking
estimate θ̂ in (3) is consistent for all choices of M ≥ min_{a≤ℓ̃} m̃_a.
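The canonical observation process is easy to mimic in simulation; a minimal sketch of ours, with the exponential-race draw inlined so the snippet is self-contained:

import numpy as np

def observe_ordered_partition(theta, sizes, rng=None):
    # Draw a full PL ranking over all d items via the exponential race, then
    # reveal the top sizes[0] items as an unordered set, the next sizes[1],
    # and so on; the remaining items form the last subset.
    rng = rng or np.random.default_rng()
    ranking = np.argsort(rng.exponential(scale=np.exp(-np.asarray(theta, dtype=float))))
    parts, start = [], 0
    for m in sizes:
        parts.append(set(ranking[start:start + m].tolist()))
        start += m
    parts.append(set(ranking[start:].tolist()))
    return parts  # most to least preferred; reverse before the edge extraction above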
Figure 2 (left) illustrates the trade-off between run-time and sample size necessary to achieve a fixed
accuracy: MSE ≈ 0.3 d² × 10⁻⁶. In the middle panel, we show the accuracy-sample tradeoff for
increasing computation M on the same data. We fix d = 256, ℓ̃ = 5, m̃_a = a for a ∈ {1, 2, 3, 4, 5},
and sample posets from the canonical scenario, except that each user is presented κ = 32 random
items. The PL weights are chosen i.i.d. U[−2, 2]. On the right panel, we let m̃_a = 3 for all a ∈ [ℓ̃]
and vary ℓ̃. We compare GRB with M = 3 to PRB, and an oracle estimator who knows the exact
ordering among those top three items and runs MLE.
[Figure 2 plots run-time in seconds and rescaled error C‖θ̂ − θ*‖₂² against the sample size n and against the number of edges |Ej|, comparing GRB of orders M = 1, . . . , 5, inconsistent PRB, the oracle lower bound, and the Cramér-Rao (CR) lower bound.]
Figure 2: The time-data trade-off for fixed accuracy (left) and accuracy improvement for increased
computation M (middle). Generalized Rank-Breaking (GRB) achieves the oracle lower bound and
significantly improves upon Pairwise Rank-Breaking (PRB) (right).
Notations. Given rank-breaking graphs {Gj(Sj, Ej)}_{j∈[n]} extracted from the posets {Gj}, we first
define the order-M rank-breaking graphs {Gj^{(M)}(Sj, Ej^{(M)})}, where Ej^{(M)} is a subset of Ej that
includes only those edges ej ∈ Ej with |T(ej)| ≤ M. This represents those edges that are included
in the estimation for a choice of M. For finite sample analysis, the following quantities capture
how the error depends on the topology of the data collected. Let κj ≡ |Sj| and ℓj ≡ |Ej^{(M)}|. We
index each edge ej in Ej^{(M)} by a ∈ [ℓj] and define m_{j,a} ≡ |T(e_{j,a})| for the a-th edge of the j-th
rank-breaking graph and r_{j,a} ≡ |T(e_{j,a})| + |B(e_{j,a})|. Note that we use a tilde with m_{j,a}
and ℓj when M is equal to |Sj|; that is, ℓ̃j is the number of edges in Ej and m̃_{j,a} is the size of the
top-sets in those edges. We let pj ≡ ∑_{a∈[ℓj]} m_{j,a} denote the effective sample size for the observation
Gj^{(M)}, such that the total effective sample size is ∑_{j∈[n]} pj. Notice that although we do not explicitly
write the dependence on M, all of the above quantities implicitly depend on the choice of M.
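The bookkeeping above reduces to a few lines; a sketch under our own naming, taking the (B(e), T(e)) pairs of one rank-breaking graph:

def edge_statistics(edges, M):
    # Keep the order-M edges |T(e)| <= M; return the m_{j,a}, r_{j,a} and the
    # effective sample size p_j = sum_a m_{j,a} for this graph.
    kept = [(len(top), len(top) + len(bottom)) for (bottom, top) in edges if len(top) <= M]
    m = [t for (t, _) in kept]
    r = [s for (_, s) in kept]
    return m, r, sum(m)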
3 Comparison graph
The analysis of the optimization in (3) shows that, with high probability, L_RB(θ) is strictly concave
with λ₂(H(θ)) ≤ −C_b α₁α₂α₃ λ₂(L) < 0 for all θ ∈ Ω_b (Lemma C.3), and the gradient is also
bounded with ‖∇L_RB(θ*)‖ ≤ C'_b α₂^{−1/2} (∑_j pj log d)^{1/2} (Lemma C.2). The quantities α₁, α₂, α₃,
and λ₂(L), to be defined shortly, represent the topology of the data. This leads to Theorem 4.1:
$$\|\hat{\theta} - \theta^*\|_2 \;\le\; \frac{2\,\|\nabla \mathcal{L}_{\rm RB}(\theta^*)\|}{-\lambda_2(H(\theta))} \;\le\; C''_b\, \frac{\sqrt{\sum_j p_j \log d}}{\alpha_1\, \alpha_2^{3/2}\, \alpha_3\, \lambda_2(L)}\,, \qquad (4)$$
where C_b, C'_b, and C''_b are constants that only depend on b, and λ₂(H(θ)) is the second largest
eigenvalue of the negative semidefinite Hessian matrix H(θ) of L_RB(θ). Recall that θᵀ1 = 0 since
we restrict our search to Ω_b. Hence, the error depends on λ₂(H(θ)) instead of λ₁(H(θ)), whose
corresponding eigenvector is the all-ones vector. We define a comparison graph H([d], E) as a
weighted undirected graph with weights A_{ii'} = ∑_{j∈[n]: i,i'∈Sj} pj/(κj(κj − 1)). The corresponding
graph Laplacian is defined as:
$$L \;\equiv\; \sum_{j=1}^{n} \sum_{i<i' \in S_j} \frac{p_j}{\kappa_j(\kappa_j - 1)}\, (e_i - e_{i'})(e_i - e_{i'})^\top\,. \qquad (5)$$
It is immediate that λ₁(L) = 0 with 1 as the eigenvector. The remaining d − 1 eigenvalues
sum to Tr(L) = ∑_j pj. The rescaled λ₂(L) and λ_d(L) capture the dependency on the topology:
$$\alpha \;\equiv\; \frac{\lambda_2(L)\,(d-1)}{\operatorname{Tr}(L)}\,, \qquad \beta \;\equiv\; \frac{\operatorname{Tr}(L)}{\lambda_d(L)\,(d-1)}\,. \qquad (6)$$
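Both L and the rescaled quantities in (6) are directly computable from the data; a minimal sketch of ours, assuming every offer set has at least two items:

import numpy as np

def comparison_laplacian(offer_sets, p):
    # L from Eq. (5): offer_sets[j] lists S_j (item indices), p[j] is p_j.
    d = 1 + max(max(S) for S in offer_sets)
    L = np.zeros((d, d))
    for S, p_j in zip(offer_sets, p):
        S = list(S)
        k = len(S)            # assumes k >= 2
        w = p_j / (k * (k - 1))
        for x in range(k):
            for y in range(x + 1, k):
                i, i2 = S[x], S[y]
                L[i, i] += w
                L[i2, i2] += w
                L[i, i2] -= w
                L[i2, i] -= w
    return L

def rescaled_spectrum(L):
    # alpha and beta from Eq. (6); lam[0] is ~0 with the all-ones eigenvector.
    lam = np.sort(np.linalg.eigvalsh(L))
    d = L.shape[0]
    tr = lam.sum()
    return lam[1] * (d - 1) / tr, tr / (lam[-1] * (d - 1))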
In an ideal case where the graph is well connected, the spectral gap of the Laplacian is large.
This ensures all eigenvalues are of the same order and α = β = Θ(1), resulting in a smaller error
rate. The concavity of L_RB(θ) also depends on the following quantities. We discuss the role of the
topology in Section 4. Note that the quantities defined in this section implicitly depend on the choice
of M, which controls the necessary computational power, via the definition of the rank-breaking
{G_{j,a}}. We define the following quantities that control our upper bound. α₁ incorporates asymmetry
in the probabilities of items being ranked at different positions depending upon their weight θi*. It is 1
for b = 0, that is, when all the items have the same weight, and decreases exponentially with increase in b.
α₂ controls the range of the size of the top-set with respect to the size of the bottom-set for which the
error decays at the rate of 1/(size of the top-set). The dependence on α₃ and ω is due to weakness
in the analysis, and ensures that the Hessian matrix is strictly negative definite.
$$\alpha_1 \;\equiv\; \min_{j,a}\Big\{\frac{r_{j,a} - m_{j,a}}{2e^{2b}\,\kappa_j}\Big\}\,, \qquad \alpha_2 \;\equiv\; \min_{j,a}\Big\{\frac{r_{j,a} - m_{j,a}}{r_{j,a}}\Big\}\,, \quad\text{and} \qquad (7)$$
$$\alpha_3 \;\equiv\; 1 - \max_{j,a}\Big\{\frac{4e^{16b}\, m_{j,a}^2\, r_{j,a}\, \kappa_j^2}{\alpha_1\,(r_{j,a} - m_{j,a})^5}\Big\}\,, \qquad \omega \;\equiv\; \max_{j,a}\Big\{\frac{m_{j,a}\,\kappa_j^2}{(r_{j,a} - m_{j,a})^2}\Big\}\,. \qquad (8)$$
4 Main Results
We present the main theoretical analyses and numerical simulations confirming the theoretical predictions.
4.1 Upper bound on the achievable error
We provide an upper bound on the error for the order-M rank-breaking approach, showing the
explicit dependence on the topology of the data. We assume each user provides a partial ranking
according to his/her ordered partitions. Precisely, we assume that the set of offerings Sj, the number
of subsets (ℓ̃j + 1), and their respective sizes (m̃_{j,1}, . . . , m̃_{j,ℓ̃j}) are predetermined. Each user
randomly draws a ranking of items from the PL model, and provides the partial ranking of the
form ({i6} ≺ {i5, i4, i3} ≺ {i2, i1}) in the example in Figure 1. For a choice of M, the order-M
rank-breaking graph is extracted from this data. The following theorem provides an upper bound on
the achieved error, and a proof is provided in the supplementary material.
Theorem 4.1. Suppose there are n users, d items parametrized by θ* ∈ Ω_b, and each user j ∈ [n] is
presented with a set of offerings Sj ⊆ [d] and provides a partial ordering under the PL model. For a
choice of M ∈ Z₊, if α₃ > 0 and the effective sample size ∑_{j=1}^{n} pj is large enough such that
$$\sum_{j=1}^{n} p_j \;\ge\; \frac{2^{14}\, e^{20b}\, \omega^2\, p_{\max}}{(\alpha\, \alpha_1 \alpha_2 \alpha_3)^2\, \beta\, \kappa_{\min}}\; d \log d\,, \qquad (9)$$
where b ≥ max_i |θi*| is the dynamic range, p_max = max_{j∈[n]} pj, κ_min = min_{j∈[n]} κj, α is the
(rescaled) spectral gap, β is the (rescaled) spectral radius in (6), and α₁, α₂, α₃, and ω are defined in
(7) and (8), then the generalized rank-breaking estimator in (3) achieves
$$\frac{1}{\sqrt{d}}\, \|\hat{\theta} - \theta^*\| \;\le\; \frac{40\, e^{7b}}{\alpha\, \alpha_1 \alpha_2^{3/2} \alpha_3}\, \sqrt{\frac{d \log d}{\sum_{j=1}^{n} \sum_{a=1}^{\ell_j} m_{j,a}}}\,, \qquad (10)$$
with probability at least 1 − 3e³ d⁻³. Moreover, for M ≤ 3 the above bound holds with α₃ replaced
by one, giving a tighter result.
Note that the dependence on the choice of M is not explicit in the bound, but rather is implicit in the
construction of the comparison graph and the number of effective samples N = ∑_j ∑_{a∈[ℓj]} m_{j,a}.
In an ideal case, b = O(1) and m_{j,a} = O(r_{j,a}^{1/2}) for all (j, a), such that α₁, α₂ are finite. Further, if
the spectral gap is large such that α > 0 and β > 0, then Equation (10) implies that we need the
effective sample size to scale as O(d log d), which is only a logarithmic factor larger than the number
of parameters. In this ideal case, there exist universal constants C₁, C₂ such that if m_{j,a} < C₁ √r_{j,a}
and r_{j,a} > C₂ κj for all {j, a}, then the condition α₃ > 0 is met. Further, when r_{j,a} = O(κj),
max κ_{j,a}/κ_{j',a'} = O(1), and max p_{j,a}/p_{j',a'} = O(1), then the condition on the effective sample size
is met with ∑_j pj = O(d log d). We believe that the dependence on α₃ is a weakness of our analysis and
that there is no dependence as long as m_{j,a} < r_{j,a}.
4.2 Lower bound on computationally unbounded estimators
Recall that ℓ̃j ≡ |Ej|, m̃_{j,a} = |T(e_a)| and r̃_{j,a} = |T(e_a) ∪ B(e_a)| when M = |Sj|. We prove a
fundamental lower bound on the achievable error rate that holds for any unbiased estimator, even with
no restrictions on the computational complexity. For each (j, a), define γ_{j,a} as
$$\gamma_{j,a} \;=\; \sum_{u=0}^{\tilde{m}_{j,a}-1} \bigg(\frac{1}{\tilde{r}_{j,a}-u} + \frac{u(\tilde{m}_{j,a}-u)}{\tilde{m}_{j,a}(\tilde{r}_{j,a}-u)^2}\bigg) \;+\; \sum_{u<u' \in [\tilde{m}_{j,a}-1]} \frac{2u}{\tilde{m}_{j,a}(\tilde{r}_{j,a}-u)}\cdot\frac{\tilde{m}_{j,a}-u'}{\tilde{r}_{j,a}-u'} \qquad (11)$$
$$\phantom{\gamma_{j,a}} \;=\; \tilde{m}_{j,a}^2/(3\tilde{r}_{j,a}) + O\big(\tilde{m}_{j,a}^3/\tilde{r}_{j,a}^2\big)\,. \qquad (12)$$
Theorem 4.2. Let U denote the set of all unbiased estimators of θ that are centered such that
θ̂ᵀ1 = 0, and let δ = max_{j∈[n], a∈[ℓ̃j]} {m̃⁻¹_{j,a} γ_{j,a}}. For all b > 0,
$$\inf_{\hat{\theta} \in U}\; \sup_{\theta^* \in \Omega_b}\; \mathbb{E}\big[\|\hat{\theta}-\theta^*\|^2\big] \;\ge\; \max\Bigg\{\frac{(d-1)^2}{\sum_{j=1}^{n}\sum_{a=1}^{\tilde{\ell}_j} \big(\tilde{m}_{j,a} - \gamma_{j,a}\big)}\,,\;\; (1-\delta)\sum_{i=2}^{d} \frac{1}{\lambda_i(L)}\Bigg\}\,. \qquad (13)$$
The proof relies on the Cramér-Rao bound and is provided in the supplementary material. Since
the γ_{j,a}'s are non-negative, the mean squared error is lower bounded by (d − 1)²/N, where
N = ∑_j ∑_{a≤ℓ̃j} m̃_{j,a} is the effective sample size. Comparing it to the upper bound in (10), this is tight
up to a logarithmic factor when (a) the topology of the data is well-behaved such that all respective
quantities are finite; and (b) there is no limit on the computational power and M can be made as large
as we need. The bound in Eq. (13) further gives a tighter lower bound, capturing the dependency
on the γ_{j,a}'s and λ_i(L)'s. Considering the first term, γ_{j,a} is larger when m̃_{j,a} is close to r̃_{j,a}, giving a
tighter bound. The second term in (13) implies we get a tighter bound when λ₂(L) is smaller.
[Figure 3 plots rescaled error C‖θ̂ − θ*‖₂² against the size m of the top-set (left: κ = 32; middle: κ = 16, including a b = 2 curve) and against the set size κ (right), comparing inconsistent PRB, GRB with order M = m, the oracle lower bound, and the Cramér-Rao (CR) lower bound.]
Figure 3: Accuracy degrades as (κ − m) gets small and as the dynamic range b gets large.
In Figure 3, left and middle panels, we compare the performance of our algorithm with pairwise breaking,
the Cramér-Rao lower bound and the oracle MLE lower bound. We fix d = 512, n = 10⁵, and θ* chosen i.i.d.
uniformly over [−2, 2]. The oracle MLE knows the relative ordering of items in all the top-sets T(e) and
hence is strictly better than GRB. We fix ℓ̃ = ℓ = 1, that is r = κ, and vary m. In the left
panel, we fix κ = 32 and in the middle panel, we fix κ = 16. Perhaps surprisingly, GRB matches
the oracle MLE, which means the relative ordering of the top-m items among themselves is statistically
insignificant when m is sufficiently small in comparison to κ. For κ = 16, as m gets large, the
error starts to increase as predicted by our analysis. The reason is that the quantities α₁ and α₂
get smaller as m increases, and the upper bound increases consequently. In the right panel, we fix
m = 4. When κ is small, α₂ is small, and hence the error is large; when b is large, α₁ is exponentially
small, and hence the error is significantly large. This is different from learning Mallows models, where
peaked distributions are easier to learn [2], and is related to the fact that we are not only interested in
recovering the (ordinal) ranking but also the (cardinal) weight.
4.3 Computational and statistical tradeoff
For estimators with limited computational power, however, the above lower bound fails to capture the
dependency on the allowed computational power. Understanding such fundamental trade-offs is a
challenging problem, which has been studied only in a few special cases, e.g. the planted clique problem
[10, 18]. This is outside the scope of this paper, and we instead investigate the trade-off achieved
by the proposed rank-breaking approach. When we are limited in computational power, Theorem
4.1 implicitly captures this dependence when order-M rank-breaking is used. The dependence is
captured indirectly via the resulting rank-breaking {G_{j,a}}_{j∈[n],a∈[ℓj]} and its topology. We
make this trade-off explicit by considering a simple but canonical example. Suppose θ* ∈ Ω_b with
b = O(1). Each user gives an i.i.d. partial ranking, where all items are offered and the partial ranking
is based on an ordered partition with ℓ̃j = ⌊√(2c) d^{1/4}⌋ subsets. The top subset has size m̃_{j,1} = 1, and
the a-th subset has size m̃_{j,a} = a, up to a < ℓ̃j, in order to ensure that they sum at most to c√d for
a sufficiently small positive constant c and the condition α₃ > 0 is satisfied. The last subset includes
all the remaining items in the bottom, ensuring m̃_{j,ℓ̃j} ≥ d/2 and α₁, α₂ and ω are all finite.
Computation. For a choice of M such that M ≤ ℓj − 1, we consider the computational complexity
in evaluating the gradient of L_RB, which scales as T_M = ∑_{j∈[n]} ∑_{a∈[M]} (m_{j,a}!) r_{j,a} = O(M!·d·n).
Note that we find the MLE by solving a convex optimization problem using first order methods, and a
detailed analysis of the convergence rate and the complexity of solving general convex optimizations
is outside the scope of this paper.
Sample. Under the canonical setting, for M ≤ ℓj − 1, we have L = M(M + 1)/(2d(d − 1)) (dI − 11ᵀ).
This complete graph has the largest possible spectral gap, and hence α > 0 and β > 0. Since the
effective sample size is ∑_{j,a} m̃_{j,a} 1{m̃_{j,a} ≤ M} = nM(M + 1)/2, it follows from Theorem 4.1
that the (rescaled) root mean squared error is O(√((d log d)/(nM²))). In order to achieve a target
error rate of ε, we need to choose M = Θ((1/ε)√((d log d)/n)). The resulting trade-off between
run-time and sample size to achieve root mean squared error ε is T(n) ≤ (⌈(1/ε)√((d log d)/n)⌉)!·d·n.
We show numerical experiments under this canonical setting in Figure 2 (left) with d = 256 and
M ∈ {1, 2, 3, 4, 5}, illustrating the trade-off in practice.
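The arithmetic behind this choice of M is simple enough to state in code; the constant c below is a placeholder assumption, since the analysis leaves it unspecified:

import math

def order_for_target_error(eps, d, n, c=1.0):
    # Canonical-setting heuristic: RMSE ~ sqrt(d log d / (n M^2)) suggests
    # taking M of order (1/eps) * sqrt(d log d / n); c absorbs constants.
    return max(1, math.ceil((c / eps) * math.sqrt(d * math.log(d) / n)))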
4.4 Real-world data sets
On sushi preferences [14] and the Jester dataset [11], we improve over pairwise breaking and achieve the
same performance as the oracle MLE. Full rankings over κ = 10 types of sushi, randomly chosen
from d = 100 types of sushi, are provided by n = 5000 individuals. As the ground truth θ*, we use the
ML estimate of the PL weights over the entire data. In Figure 4, left panel, for each m ∈ {3, 4, 5, 6, 7},
we remove the known ordering among the top-m and bottom-(10 − m) sushi in each set, and run
our estimator with one breaking edge between the top-m and bottom-(10 − m) items. We compare our
algorithm with inconsistent pairwise breaking (using the optimal choice of parameters from [15]) and
the oracle MLE. For m ≤ 6, the proposed rank-breaking performs as well as an oracle who knows
the hidden ranking among the top m items. The Jester dataset consists of continuous ratings between
−10 and +10 of 100 jokes on sets of size κ, 36 ≤ κ ≤ 100, by 24,983 users. We convert the ratings into
full rankings. The ground truth θ* is computed similarly. For m ∈ {2, 3, 4, 5}, we convert each full
ranking into a poset that has ℓ = ⌊κ/m⌋ partitions of size m, by removing the known relative ordering
from each partition. Figure 4 compares the three algorithms using all samples (middle panel), and by
varying the sample size (right panel) for fixed m = 4. All figures are averaged over 50 instances.
[Figure 4 plots rescaled error C‖θ̂ − θ*‖₂² against the size of the top-set m (left: sushi; middle: Jester) and against the sample size n for fixed m = 4 (right), comparing inconsistent PRB, GRB with order M = m (M = 4 on the right), and the oracle lower bound.]
Figure 4: Generalized rank-breaking improves over pairwise RB and is close to oracle MLE.
Acknowledgements
This work is supported by NSF SaTC award CNS-1527754, and NSF CISE award CCF-1553452.
References
[1] A. Agarwal, P. L. Bartlett, and J. C. Duchi. Oracle inequalities for computationally adaptive model selection. arXiv preprint arXiv:1208.0129, 2012.
[2] A. Ali and M. Meilă. Experiments with Kemeny ranking: What works when? Mathematical Social Sciences, 64(1):28–40, 2012.
[3] H. Azari Soufiani, W. Chen, D. C. Parkes, and L. Xia. Generalized method-of-moments for rank aggregation. In Advances in Neural Information Processing Systems 26, pages 2706–2714, 2013.
[4] H. Azari Soufiani, D. Parkes, and L. Xia. Computing parametric ranking models via rank-breaking. In Proceedings of The 31st International Conference on Machine Learning, pages 360–368, 2014.
[5] H. Azari Soufiani, D. C. Parkes, and L. Xia. Random utility theory for social choice. In NIPS, pages 126–134, 2012.
[6] N. Betzler, R. Bredereck, and R. Niedermeier. Theoretical and empirical evaluation of data reduction for exact Kemeny rank aggregation. Autonomous Agents and Multi-Agent Systems, 28(5):721–748, 2014.
[7] O. Bousquet and L. Bottou. The tradeoffs of large scale learning. In Advances in Neural Information Processing Systems, pages 161–168, 2008.
[8] V. Chandrasekaran and M. I. Jordan. Computational and statistical tradeoffs via convex relaxation. Proceedings of the National Academy of Sciences, 110(13):E1181–E1190, 2013.
[9] Y. Chen and C. Suh. Spectral MLE: Top-k rank aggregation from pairwise comparisons. arXiv:1504.07218, 2015.
[10] Y. Deshpande and A. Montanari. Improved sum-of-squares lower bounds for hidden clique and hidden submatrix problems. arXiv preprint arXiv:1502.06590, 2015.
[11] K. Goldberg, T. Roeder, D. Gupta, and C. Perkins. Eigentaste: A constant time collaborative filtering algorithm. Information Retrieval, 4(2):133–151, 2001.
[12] B. Hajek, S. Oh, and J. Xu. Minimax-optimal inference from partial rankings. In Advances in Neural Information Processing Systems 27, pages 1475–1483, 2014.
[13] T. P. Hayes. A large-deviation inequality for vector-valued martingales. Combinatorics, Probability and Computing, 2005.
[14] T. Kamishima. Nantonac collaborative filtering: Recommendation based on order responses. In Proceedings of the Ninth ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pages 583–588. ACM, 2003.
[15] A. Khetan and S. Oh. Data-driven rank breaking for efficient rank aggregation. In International Conference on Machine Learning, 2016.
[16] M. Lucic, M. I. Ohannessian, A. Karbasi, and A. Krause. Tradeoffs for space, time, data and risk in unsupervised learning. In AISTATS, 2015.
[17] L. Maystre and M. Grossglauser. Fast and accurate inference of Plackett-Luce models. In Advances in Neural Information Processing Systems 28 (NIPS 2015), 2015.
[18] R. Meka, A. Potechin, and A. Wigderson. Sum-of-squares lower bounds for planted clique. In Proceedings of the Forty-Seventh Annual ACM Symposium on Theory of Computing, pages 87–96. ACM, 2015.
[19] S. Negahban, S. Oh, and D. Shah. Rank centrality: Ranking from pair-wise comparisons. Preprint arXiv:1209.1688, 2014.
[20] A. Prékopa. Logarithmic concave measures and related topics. In Stochastic Programming, 1980.
[21] N. B. Shah, S. Balakrishnan, J. Bradley, A. Parekh, K. Ramchandran, and M. J. Wainwright. Estimation from pairwise comparisons: Sharp minimax bounds with topology dependence. arXiv:1505.01462, 2015.
[22] N. B. Shah, S. Balakrishnan, A. Guntuboyina, and M. J. Wainright. Stochastically transitive models for pairwise comparisons: Statistical and computational issues. arXiv preprint arXiv:1510.05610, 2015.
[23] S. Shalev-Shwartz and N. Srebro. SVM optimization: Inverse dependence on training set size. In Proceedings of the 25th International Conference on Machine Learning, pages 928–935. ACM, 2008.
6,017 | 6,443 | Gaussian Processes for Survival Analysis
Tamara Fernández
Department of Statistics,
University of Oxford.
Oxford, UK.
fernandez@stats.ox.ac.uk
Nicolás Rivera
Department of Informatics,
King's College London.
London, UK.
nicolas.rivera@kcl.ac.uk
Yee Whye Teh
Department of Statistics,
University of Oxford.
Oxford, UK.
y.w.teh@stats.ox.ac.uk
Abstract
We introduce a semi-parametric Bayesian model for survival analysis. The model
is centred on a parametric baseline hazard, and uses a Gaussian process to model
variations away from it nonparametrically, as well as dependence on covariates.
As opposed to many other methods in survival analysis, our framework does not
impose unnecessary constraints on the hazard rate or on the survival function. Furthermore, our model handles left, right and interval censoring mechanisms common
in survival analysis. We propose an MCMC algorithm to perform inference and an
approximation scheme based on random Fourier features to make computations
faster. We report experimental results on synthetic and real data, showing that our
model performs better than competing models such as Cox proportional hazards,
ANOVA-DDP and random survival forests.
1 Introduction
Survival analysis is a branch of statistics focused on the study of time-to-event data, usually called
survival times. This type of data appears in a wide range of applications such as failure times
in mechanical systems, death times of patients in a clinical trial or duration of unemployment in
a population. One of the main objectives of survival analysis is the estimation of the so-called
survival function and the hazard function. If a random variable has density function f and cumulative
distribution function F, then its survival function S is 1 − F, and its hazard λ is f/S. While the
survival function S(t) gives us the probability a patient survives up to time t, the hazard function
λ(t) is the instant probability of death given that she has survived until t.
Due to the nature of the studies in survival analysis, the data contains several aspects that make
inference and prediction hard. One important characteristic of survival data is the presence of many
covariates. Another distinctive flavour of survival data is the presence of censoring. A survival time
is censored when it is not fully observable but we have an upper or lower bound of it. For instance,
this happens in clinical trials when a patient drops out of the study.
There are many methods for modelling this type of data. Arguably, the most popular is the Kaplan-Meier estimator [13]. The Kaplan-Meier estimator is a very simple, nonparametric estimator of the
survival function. It is very flexible and easy to compute; it handles censored times and requires
no prior knowledge of the nature of the data. Nevertheless, it cannot handle covariates naturally and
no prior knowledge can be incorporated. A well-known method that incorporates covariates is the
Cox proportional hazard model [3]. Although this method is very popular and useful in applications,
a drawback of it is that it imposes the strong assumption that the hazard curves are proportional and
non-crossing, which is very unlikely for some data sets.
There is a vast literature of Bayesian nonparametric methods for survival analysis [9]. Some examples
include the so-called neutral-to-the-right priors [5], which model survival curves as e^{−μ̃((0,t])}, where
μ̃ is a completely random measure on R₊. Two common choices for μ̃ are the Dirichlet process
[8] and the beta-Stacy process [20], the latter being a bit more tractable due to its conjugacy. Other
alternatives place a prior on the hazard function, one example of this, is the extended gamma process
[7]. The weakness of the above methods is that there is no natural nor direct way to incorporate
covariates and thus, they have not been extensively used by practitioners of survival analysis. More
recently, [4] developed a new model called ANOVA-DDP which mixes ideas from ANOVA and
Dirichlet processes. This method successfully incorporates covariates without imposing strong
constraints, though it is not clear how to incorporate expert knowledge. Within the context of
Gaussian processes, a few models have been considered, for instance [14] and [12]. Nevertheless these
models fail to go beyond the proportional hazard assumption, which corresponds to one of the aims
of this work. Another option is [11], which describes a survival model with non-proportional hazard
and time-dependent covariates. Recently, we became aware of the work of [2], which uses a so-called
accelerated failure times model. Here, the dependence of the failure times on covariates is modelled
by rescaling time, with the rescaling factor modelled as a function of covariates with a Gaussian
process prior. This model is different from our proposal, and is more complex to study and to work
with.
Lastly, another well-known method is Random Survival Forest [10]. This can be seen as a generalisation of the Kaplan-Meier estimator to several covariates. It is fast and flexible; nevertheless it cannot
incorporate expert knowledge and lacks interpretation, which is fundamental for survival analysis.
In this paper we introduce a new semiparametric Bayesian model for survival analysis. Our model is
able to handle censoring and covariates. Our approach models the hazard function as the multiplication
of a parametric baseline hazard and a nonparametric part. The parametric part of our model allows
the inclusion of expert knowledge and provides interpretability, while the nonparametric part allows
us to handle covariates and to amend incorrect or incomplete prior knowledge. The nonparametric
part is given by a non-negative function of a Gaussian process on R+ .
Given the hazard function λ of a random variable T, we sample from it by simulating the first jump
of a Poisson process with intensity λ. In our case, the intensity of the Poisson process is a function of
a Gaussian process, obtaining what is called a Gaussian Cox process. One of the main difficulties of
working with Gaussian Cox processes is the problem of learning the 'true' intensity given the data
because, in general, it is impossible to sample the whole path of a Gaussian process. Nevertheless,
exact inference was proved to be tractable by [1]. Indeed, the authors developed an algorithm by
exploiting a nice trick which allows them to make inference without sampling the whole Gaussian
process but just a finite number of points.
In this paper, we study basic properties of our prior. We also provide an inference algorithm based on
a sampler proposed by [18], which is a refined version of the algorithm presented in [1]. To make
the algorithm scale, we introduce random Fourier features to approximate the Gaussian process
and we supply the respective inference algorithm. We demonstrate the performance of our method
experimentally by using synthetic and real data.
2 Model
Consider a continuous random variable T on R₊ = [0, ∞), with density function f and cumulative
distribution function F. Associated with T, we have the survival function S = 1 − F and the hazard
function λ = f/S. The survival function S(t) gives us the probability a patient survives up to time t,
while the hazard function λ(t) gives us the instant risk of a patient at time t.
We define a Gaussian process prior over the hazard function λ. In particular, we choose λ(t) =
λ₀(t)σ(l(t)), where λ₀(t) is a baseline hazard function, l(t) is a centred stationary Gaussian process
with covariance function κ, and σ is a positive link function. For our implementation, we choose σ as
the sigmoidal function σ(x) = (1 + e^{−x})^{−1}, which is a quite standard choice in applications. In this
way, we generate T as the first jump of the Poisson process with intensity λ, i.e. T has density
λ(t)e^{−∫₀ᵗ λ(s)ds}. Our model for a data set of i.i.d. Tᵢ, without covariates, is
$$l(\cdot) \sim \mathcal{GP}(0, \kappa)\,, \qquad \lambda(t)\,|\,l, \lambda_0(t) = \lambda_0(t)\,\sigma(l(t))\,, \qquad T_i\,|\,\lambda \;\overset{\text{iid}}{\sim}\; \lambda(t)\, e^{-\int_0^{T_i} \lambda(s)\,ds}\,, \qquad (1)$$
which can be interpreted as a baseline hazard with a multiplicative nonparametric noise. This is an
attractive feature as an expert may choose a particular hazard function and then the nonparametric
noise amends an incomplete or incorrect prior knowledge. The incorporation of covariates is discussed
later in this section, while censoring is discussed in section 3.
Notice that E(σ(X)) = 1/2 for a zero-mean Gaussian random variable X. Then, as we are working
with a centred Gaussian process, it holds that E(λ(t)) = λ₀(t)E(σ(l(t))) = λ₀(t)/2. Hence, we can
imagine our model as a random hazard centred at λ₀(t)/2 with a multiplicative noise. In the simplest
scenario, we may take a constant baseline hazard λ₀(t) = 2Ω with Ω > 0. In such a case, we obtain a
random hazard centred at Ω, which is simply the hazard function of an exponential random variable
with mean 1/Ω. Another choice might be λ₀(t) = 2βt^{β−1}, which determines a random hazard
function centred at βt^{β−1}, which corresponds to the hazard function of the Weibull distribution, a
popular default distribution in survival analysis.
In addition to the hierarchical model in (1), we include hyperparameters for the kernel κ and for the
baseline hazard λ₀(t). In particular, for the kernel it is common to include a length scale parameter
and an overall variance.
Finally, we need to ensure the model we proposed defines a well-defined survival function, i.e.
S(t) → 0 as t tends to infinity. This is not trivial as our random survival function is generated by
a Gaussian process. The next proposition, proved in the supplemental material, states that under
suitable regularity conditions, the prior defines proper survival functions.
Proposition 1. Let (l(t))_{t≥0} ~ GP(0, κ) be a stationary continuous Gaussian process. Suppose
that κ(s) is non-increasing and that lim_{s→∞} κ(s) = 0. Moreover, assume there exist K > 0 and β > 0
such that λ₀(t) ≥ Kt^{β−1} for all t ≥ 1. Let S(t) be the random survival function associated with
(l(t))_{t≥0}; then lim_{t→∞} S(t) = 0 with probability 1.
Note the above proposition is satisfied by the hazard functions of the Exponential and Weibull
distributions.
2.1 Adding covariates
We model the relation between time and covariates by the kernel of the Gaussian process prior. A
simple way to generate kernels in time and covariates is to construct kernels for each covariate and
time, and then perform basic operations on them, e.g. addition or multiplication. Let (t, X) denote a
time t with covariates X ∈ R^d. Then for pairs (t, X) and (s, Y) we can construct kernels like
$$\tilde{K}((t, X), (s, Y)) \;=\; \tilde{K}_0(t, s) + \sum_{j=1}^{d} \tilde{K}_j(X_j, Y_j)\,,$$
or the following kernel, which is the one we use in our experiments,
$$K((t, X), (s, Y)) \;=\; K_0(t, s) + \sum_{j=1}^{d} X_j Y_j\, K_j(t, s)\,. \qquad (2)$$
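A minimal sketch of this kernel, assuming squared-exponential components K₀, K₁, . . . , K_d and hypothetical parameter names:

import numpy as np

def sq_exp(t, s, length):
    return np.exp(-0.5 * (t - s) ** 2 / length ** 2)

def K(t, X, s, Y, lengths):
    # Eq. (2): a baseline time kernel K_0 plus one covariate-weighted time
    # kernel K_j per covariate dimension; lengths[j] is the length scale of K_j.
    val = sq_exp(t, s, lengths[0])
    for j in range(len(X)):
        val += X[j] * Y[j] * sq_exp(t, s, lengths[j + 1])
    return val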
Observe that the first kernel establishes an additive relation between time and covariates while
the second creates an interaction between the value of the covariates and time. More complicated
structures that include more interaction between covariates can be considered. We refer to the work
of [6] for details about the construction and interpretation of the operations between kernels. Observe
the new kernel produces a Gaussian process from the space of time and covariates to the real line, i.e.
it has to be evaluated at a pair consisting of a time and a covariate vector.
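To make this concrete, here is a minimal sketch of the kernel in equation (2), assuming squared exponential base kernels K₀, ..., K_d; the function names and hyperparameter values are ours, not the paper's:

import numpy as np

def sq_exp(t, s, var=1.0, length=1.0):
    """Squared exponential kernel: var * exp(-(t - s)^2 / (2 * length^2))."""
    return var * np.exp(-(t - s) ** 2 / (2.0 * length ** 2))

def K(tx, sy, vars_, lengths):
    """Kernel (2): K((t, X), (s, Y)) = K0(t, s) + sum_j Xj * Yj * Kj(t, s)."""
    (t, X), (s, Y) = tx, sy
    val = sq_exp(t, s, vars_[0], lengths[0])          # time-only part K0
    for j in range(len(X)):                           # covariate-time interaction terms
        val += X[j] * Y[j] * sq_exp(t, s, vars_[j + 1], lengths[j + 1])
    return val

# Two (time, covariates) points with d = 2 covariates:
k_val = K((1.0, [0.5, 1.0]), (2.0, [0.3, 0.7]), vars_=[1.0] * 3, lengths=[1.0] * 3)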
The new model to generate Tᵢ, assuming we are given the covariates Xᵢ, is

l(·) ∼ GP(0, K),    λᵢ(t) | l, λ₀(t), Xᵢ = λ₀(t)σ(l(t, Xᵢ)),    Tᵢ | λᵢ ∼indep λᵢ(Tᵢ)e^{−∫₀^{Tᵢ} λᵢ(s)ds}.    (3)

In our construction of the kernel K, we choose all kernels Kⱼ as stationary kernels (e.g. squared
exponential), so that K is stationary with respect to time, and proposition 1 is valid for each fixed
covariate X, i.e. given a fixed covariate X, we have S_X(t) = P(T > t | X) → 0 as t → ∞.
3 Inference

3.1 Data augmentation scheme

Notice that the likelihood of the model in equation (3) has to deal with terms of the form
λᵢ(t)e^{−∫₀ᵗ λᵢ(s)ds}, as these expressions come from the density of the first jump of a non-homogeneous Poisson process with intensity λᵢ. In general the integral is not analytically tractable,
since λᵢ is defined by a Gaussian process. A numerical scheme can be used, but it is approximate and
computationally expensive. Following [1] and [18], we develop a data augmentation scheme based
on thinning a Poisson process that allows us to efficiently avoid a numerical method.
If we want to sample a time T with covariate X, as given in equation (3), we can use the following
generative process. Simulate a sequence of points g₁, g₂, . . . distributed according to a Poisson
process with intensity λ₀(t). We assume the user chooses a well-known parametric form, so that
sampling the points g₁, g₂, . . . is tractable (in the Weibull case this can be done easily). Starting from
k = 1, we accept the point g_k with probability σ(l(g_k, X)). If it is accepted we set T = g_k; otherwise
we try the point g_{k+1} and repeat. We denote by G the set of rejected points, i.e. if we accepted g_k, then
G = {g₁, . . . , g_{k−1}}. Note that the above sampling procedure only needs to evaluate the Gaussian process at
the points (g_k, X) instead of on the whole space.
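A minimal sketch of this generative thinning procedure, assuming the constant baseline λ₀(t) = 2λ so that the dominating Poisson process is homogeneous (for a general λ₀ one would draw the g_k via the mapping theorem, as in Algorithm 1 below); the names are ours:

import numpy as np

def sample_time_by_thinning(l, X, lam=1.0, rng=np.random.default_rng(0)):
    """Sample (T, G) by thinning a rate-2*lam homogeneous Poisson process."""
    sigmoid = lambda v: 1.0 / (1.0 + np.exp(-v))
    t, G = 0.0, []
    while True:
        t += rng.exponential(1.0 / (2.0 * lam))    # next candidate jump g_k
        if rng.uniform() < sigmoid(l(t, X)):       # accept with prob sigma(l(g_k, X))
            return t, G                            # T = g_k, G = rejected points
        G.append(t)                                # otherwise reject and continue

T, G = sample_time_by_thinning(lambda t, X: np.sin(t) + X[0], X=[0.5])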
Following the above scheme to sample T, the following proposition can be shown.

Proposition 2. Let Λ₀(t) = ∫₀ᵗ λ₀(s)ds; then

p(G, T | λ₀, l(t)) = ( λ₀(T) ∏_{g∈G} λ₀(g) ) e^{−Λ₀(T)} σ(l(T)) ∏_{g∈G} (1 − σ(l(g))).    (4)

Proof sketch. Consider a Poisson process on [0, ∞) with intensity λ₀(t). The first term on the
RHS of equation (4) is the density of putting points exactly in G ∪ {T}. The second term is the
probability of putting no points in [0, T] \ (G ∪ {T}), i.e. e^{−Λ₀(T)}; it is independent
of the first one. The last terms come from the acceptance/rejection part of the process: the points
g ∈ G are rejected with probability 1 − σ(l(g)), while the point T is accepted with probability σ(l(T)).
Since the acceptance/rejection of points is independent of the Poisson process, we get the result.

Using the above proposition, the model of equation (3) can be reformulated as the following tractable
generative model:

l(·) ∼ GP(0, K),    (G, T) | λ₀(t), l(t) ∼ e^{−Λ₀(T)} ( σ(l(T))λ₀(T) ) ∏_{g∈G} (1 − σ(l(g)))λ₀(g).    (5)

Our model states a joint distribution for the pair (G, T), where G is the set of rejected jump points of
the thinned Poisson process and T is the first accepted one.
To perform inference we need data (Gi , Ti , Xi ), whereas we only receive points (Ti , Xi ). Thus, we
need to sample the missing data Gi given (Ti , Xi ). The next proposition gives us a way to do this.
Proposition 3. [18] Let T be a data point with covariate X and let G be its set of rejected points.
Then the distribution of G given (T, X, λ₀, l) is that of a non-homogeneous Poisson process
with intensity λ₀(t)(1 − σ(l(t, X))) on the interval [0, T].

3.2 Inference algorithm

The above data augmentation scheme suggests the following inference algorithm. For each data
point (Tᵢ, Xᵢ) sample Gᵢ | (Tᵢ, Xᵢ, λ₀, l), then sample l | ((Gᵢ, Tᵢ, Xᵢ)_{i=1}^n, λ₀), where n is the number
of data points. Observe that the sampling of l given ((Gᵢ, Tᵢ, Xᵢ)_{i=1}^n, λ₀) can be seen as a Gaussian
process binary classification problem, where the points Gᵢ and Tᵢ represent two different classes. A
variety of MCMC techniques can be used to sample l; see [15] for details.
For our algorithm we use the following notation. We denote the dataset as (Tᵢ, Xᵢ)_{i=1}^n. The set Gᵢ
refers to the set of rejected points of Tᵢ. We denote G = ∪_{i=1}^n Gᵢ and T = {T₁, . . . , Tₙ} for the
whole sets of rejected and accepted points, respectively. For a point t ∈ Gᵢ ∪ {Tᵢ} we write l(t)
instead of l(t, Xᵢ), but remember that each point has an associated covariate. For a set of points A
we denote l(A) = {l(a) : a ∈ A}. Also, Λ₀(t) refers to ∫₀ᵗ λ₀(s)ds, and Λ₀⁻¹ denotes its inverse
function (which exists since Λ₀(t) is increasing). Finally, N denotes the number of iterations we are going
to run our algorithm for. The pseudocode of our algorithm is given in Algorithm 1.
Algorithm 1: Inference Algorithm.
Input: Set of times T, the Gaussian process l instantiated at T, and other initial parameters
1:  for q = 1 : N do
2:      for i = 1 : n do
3:          nᵢ ∼ Poisson(1; Λ₀(Tᵢ))
4:          Âᵢ ∼ U(nᵢ; 0, Λ₀(Tᵢ))
5:          Set Aᵢ = Λ₀⁻¹(Âᵢ)
6:      Set A = ∪_{i=1}^n Aᵢ
7:      Sample l(A) | l(G ∪ T), λ₀
8:      for i = 1 : n do
9:          Uᵢ ∼ U(nᵢ; 0, 1)
10:         Set Gᵢ = {a ∈ Aᵢ such that Uᵢ < 1 − σ(l(a))}
11:     Set G = ∪_{i=1}^n Gᵢ
12:     Update the parameters of λ₀(t)
13:     Update l(G ∪ T) and the hyperparameters of the kernel

Lines 2 to 11 sample the set of rejected points Gᵢ for each survival time Tᵢ. In particular, lines 3
to 5 use the Mapping theorem, which tells us how to map a homogeneous Poisson process into a
non-homogeneous one with the appropriate intensity. Observe that this makes use of the function Λ₀ and its
inverse, which shall be provided or be easily computable. The following lines classify the
points drawn from the Poisson process with intensity λ₀ into the set Gᵢ, as in proposition 3. Line 7 is
used to sample the Gaussian process at the set of points A given the values on the current set G ∪ T.
Observe that at the beginning of the algorithm we have G = ∅.
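As a concrete illustration of lines 3 to 5, the sketch below (our own) draws the candidate points Aᵢ for the Weibull baseline λ₀(t) = 2αt^{β−1} used later in the experiments, for which Λ₀(t) = (2α/β)t^β has the closed-form inverse Λ₀⁻¹(u) = (βu/(2α))^{1/β}:

import numpy as np

def sample_candidates(T_i, alpha=1.0, beta=1.5, rng=np.random.default_rng(0)):
    """Lines 3-5 of Algorithm 1: candidate points A_i on [0, T_i] via the mapping theorem."""
    Lam0 = lambda t: (2.0 * alpha / beta) * t ** beta             # Lambda_0(t) = int_0^t lambda_0(s) ds
    Lam0_inv = lambda u: (beta * u / (2.0 * alpha)) ** (1.0 / beta)
    n_i = rng.poisson(Lam0(T_i))                                  # n_i ~ Poisson(Lambda_0(T_i))
    A_hat = rng.uniform(0.0, Lam0(T_i), size=n_i)                 # uniforms on [0, Lambda_0(T_i)]
    return Lam0_inv(A_hat)                                        # A_i = Lambda_0^{-1}(A_hat)

A_i = sample_candidates(T_i=3.0)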
3.3 Adding censoring

Usually, in survival analysis, we encounter three types of censoring: right, left and interval censoring.
We assume each data point Tᵢ is associated with an (observable) indicator δᵢ, denoting the type of
censoring, or that the time is not censored. We describe how the algorithm described before can easily
handle any type of censoring.

Right censoring: In the presence of right censoring, the likelihood for a survival time Tᵢ is S(Tᵢ). The
related event, in terms of the rejected points, is that no location in [0, Tᵢ) is accepted. Hence,
we can treat right censoring in the same way as the uncensored case, by just sampling from the
distribution of the rejected jump times prior to Tᵢ. In this case, Tᵢ is not an accepted location, i.e. Tᵢ is
not considered in the set T of line 7 nor line 13.

Left censoring: In this set-up, we know the survival time is at most Tᵢ, so the likelihood of
such a time is F(Tᵢ). Treating this type of censoring is slightly more difficult than the previous case
because the event is more complex: we ask for at least one jump time to be accepted prior to Tᵢ, which
might lead us to a larger set of latent variables. In order to avoid this, we proceed by imputing
the 'true' survival time Tᵢ′ using its truncated distribution on [0, Tᵢ], and then proceed with Tᵢ′
(uncensored) instead of Tᵢ. We can sample Tᵢ′ as follows: we sample the first point of a Poisson
process with the current intensity λ; if such a point falls after Tᵢ, we reject it and repeat the process
until we get one. The imputation step has to be repeated at the beginning of each iteration.

Interval censoring: If we know that the survival time lies in the interval I = [Sᵢ, Tᵢ], we can deal with
interval censoring in the same way as left censoring, but imputing the survival time Tᵢ′ in I.
4 Approximation scheme

As shown in Algorithm 1, in line 7 we need to sample the Gaussian process (l(t))_{t≥0} at the set of
points A from its conditional distribution, while in line 13 we have to update (l(t))_{t≥0} on the set
G ∪ T. Both lines require matrix inversion, which scales badly for massive datasets or for data T that
generate a large set G. In order to help the inference we use a random feature approximation of the
kernel [17].
We exemplify the idea on the kernel we use in our experiments, which is given by K((t, X), (s, Y)) =
K₀(t, s) + Σ_{j=1}^d Xⱼ Yⱼ Kⱼ(t, s), where each Kⱼ is a squared exponential kernel with overall variance
σⱼ² and length-scale parameter ℓⱼ. Hence, for m ≥ 0, the approximation of our Gaussian process is
given by

g^m(t, X) = g₀^m(t) + Σ_{j=1}^d Xⱼ gⱼ^m(t),    (6)

where each gⱼ^m(t) = Σ_{k=1}^m ( aⱼᵏ cos(sⱼᵏ t) + bⱼᵏ sin(sⱼᵏ t) ), and each aⱼᵏ and bⱼᵏ is an independent sample
of N(0, σⱼ²), where σⱼ² is the overall variance of the kernel Kⱼ. Moreover, the sⱼᵏ are independent samples
of N(0, 1/(2πℓⱼ)), where ℓⱼ is the length-scale parameter of the kernel Kⱼ. Notice that g^m(t, X) is
a Gaussian process, since each gⱼ^m(t) is a sum of independent normally distributed random variables.
It is known that as m goes to infinity, the kernel of g^m(t, X) approximates the kernel K. The above
approximation can be done for any stationary kernel, and we refer the reader to [17] for details.
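A sketch of the approximation in equation (6); whether 1/(2πℓⱼ) denotes a variance or a standard deviation is a convention the text leaves implicit, and we read it as the standard deviation here. Variable names and the shared hyperparameters are ours:

import numpy as np

def sample_random_features(d, m=50, var=1.0, length=1.0, rng=np.random.default_rng(0)):
    """Draw a, b ~ N(0, var) and frequencies s ~ N(0, 1/(2*pi*length)) for kernels j = 0..d.
    For brevity we share var/length across kernels; the paper allows per-kernel values."""
    a = rng.normal(0.0, np.sqrt(var), size=(d + 1, m))
    b = rng.normal(0.0, np.sqrt(var), size=(d + 1, m))
    s = rng.normal(0.0, 1.0 / (2.0 * np.pi * length), size=(d + 1, m))  # std, by assumption
    return a, b, s

def g_m(t, X, a, b, s):
    """Evaluate g^m(t, X) = g_0^m(t) + sum_j X_j g_j^m(t) from equation (6)."""
    g = (a * np.cos(s * t) + b * np.sin(s * t)).sum(axis=1)  # g_j^m(t) for j = 0..d
    return g[0] + np.dot(X, g[1:])

a, b, s = sample_random_features(d=2)
val = g_m(1.3, np.array([0.5, 1.0]), a, b, s)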
The inference algorithm for this scheme is practically the same, except for two small changes. The
values l(A) in line 7 are easier to evaluate because we just need to know the values of the aⱼᵏ and bⱼᵏ,
and no matrix inversion is needed. In line 13 we just need to update all values aⱼᵏ and bⱼᵏ; since they
are independent variables, there is no need for matrix inversion.

5 Experiments
All the experiments are performed using our approximation scheme of equation (6) with a value of
m = 50. Recall that for each Gaussian process we use a squared exponential kernel with overall
variance σⱼ² and length-scale parameter ℓⱼ. Hence, for a set of d covariates we have a set of 2(d + 1)
hyper-parameters associated with the Gaussian processes. In particular, we follow a Bayesian approach
and place a log-Normal prior on the length-scale parameters ℓⱼ, and a gamma prior (an inverse gamma is
also useful since it is conjugate) on the variances σⱼ². We use the elliptical slice sampler [16] to jointly
update the set of coefficients {aⱼᵏ, bⱼᵏ} and the length-scale parameters.

With respect to the baseline hazard, we consider two models. For the first option, we choose the baseline
hazard 2αt^{β−1} of a Weibull random variable. Following a Bayesian approach, we choose a gamma
prior on α and a uniform U(0, 2.3) prior on β. Notice the posterior distribution for α is conjugate, and thus
we can easily sample from it. For β, we use a Metropolis step to sample from its posterior. Additionally,
observe that for the prior distribution of β we constrain the support to (0, 2.3); the reason is that
the expected size of the set G increases with β, which slows down computations.
The second alternative is to choose the baseline hazard as λ₀(t) = 2λ, with a gamma prior over the
parameter λ. The posterior distribution of λ is then also gamma. We refer to these models as the Weibull
model (W-SGP) and the Exponential model (E-SGP), respectively.

The implementation of both models is exactly the same as in Algorithm 1 and uses the same hyperparameters described before. As the tuning of initial parameters can be hard, we use the maximum
likelihood estimator for the initial parameters of the model.
5.1 Synthetic Data

In this section we present experiments on synthetic data. Here we perform the experiment
proposed in [4] for crossing data. We simulate n = 25, 50, 100 and 150 points from each of the
following densities, p₀(t) = N(3, 0.8²) and p₁(t) = 0.4 N(4, 1) + 0.6 N(2, 0.8²), restricted to R⁺.
The data contain the sample points and a covariate indicating whether each point was sampled from the
p.d.f. p₀ or p₁. Additionally, to each data point we add 3 noisy covariates taking random values in the
interval [0, 1]. We report the estimates of the survival functions for the Weibull model in figure 1,
while the results for the Exponential model are given in the supplemental material.
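For reference, a sketch of this data-generating process as we read it (the rejection step restricts the samples to R⁺, and the function names are ours):

import numpy as np

def sample_group(n, group, rng=np.random.default_rng(0)):
    """Draw n times from p0 = N(3, 0.8^2) or p1 = 0.4 N(4,1) + 0.6 N(2, 0.8^2), restricted to R+."""
    out = []
    while len(out) < n:
        if group == 0:
            t = rng.normal(3.0, 0.8)
        else:
            t = rng.normal(4.0, 1.0) if rng.uniform() < 0.4 else rng.normal(2.0, 0.8)
        if t > 0:                          # restrict to the positive reals
            out.append(t)
    # Covariates: the group indicator plus 3 noise covariates uniform on [0, 1].
    X = np.column_stack([np.full(n, group), rng.uniform(0, 1, size=(n, 3))])
    return np.array(out), X

T0, X0 = sample_group(150, group=0)
T1, X1 = sample_group(150, group=1)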
It is clear that for the clean data (without extra noisy covariates), the more data the better the
estimation. In particular, the model perfectly detects the cross in the survival functions. For the noisy
data we can see that with few data points the noise seems to affect the precision of our
estimates in both models. Nevertheless, the more points, the more precise our estimates of the
survival curves become. With 150 points, each group seems to be centred on the corresponding real survival
function, independently of the noisy covariates.
We finally remark that for the W-SGP and E-SGP models, the priors of the hazards are centred on a
Weibull and an Exponential hazard, respectively. Since the synthetic data does not come from those
distributions, it will be harder to approximate the true survival function with few data. Indeed, we
observe our models have problems at estimating the survival functions for times close to zero.

[Figure 1: Weibull model. First row: clean data; second row: data with noisy covariates. The columns
show 25, 50, 100 and 150 data points per group (shown on the X-axis), with data increasing from
left to right. Dots indicate data generated from p₀, crosses from p₁. In the first row a credibility
interval is shown. In the second row a curve for each combination of noisy covariates is given.]
5.2 Real data experiments

To compare our models we use the so-called concordance index. The concordance index is a standard
measure in survival analysis which estimates how good the model is at ranking survival times.
We consider a set of survival times with their respective censoring indicators and sets of covariates
(T₁, δ₁, X₁), . . . , (Tₙ, δₙ, Xₙ). In this particular context, we just consider right censoring.

To compute the C-index, consider all possible pairs (Tᵢ, δᵢ, Xᵢ; Tⱼ, δⱼ, Xⱼ) for i ≠ j. We call a
pair admissible if it can be ordered. If both survival times are right-censored, i.e. δᵢ = δⱼ = 0, it is
impossible to order them; we have the same problem if the smaller of the survival times in a pair is
censored, i.e. Tᵢ < Tⱼ and δᵢ = 0. All the other cases under this context will be called admissible.
Given just the covariates Xᵢ, Xⱼ and the statuses δᵢ, δⱼ, the model has to predict whether Tᵢ < Tⱼ or the other way
around. We compute the C-index as the number of pairs that were correctly ordered by
the model, given the covariates, over the number of admissible pairs. A larger C-index indicates the
model is better at predicting which patient dies first by observing the covariates. If the C-index is close
to 0.5, the prediction made by the model is close to random.
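A direct implementation of the C-index as just described, restricted to right censoring (δ = 0 meaning censored); we operationalize the model's prediction through risk scores, where a higher score means an earlier predicted death, which is the standard convention but is our choice here:

import numpy as np

def c_index(T, delta, scores):
    """C-index: fraction of admissible pairs ordered correctly by the risk scores."""
    correct, admissible = 0, 0
    n = len(T)
    for i in range(n):
        for j in range(n):
            # Admissible pair: T_i < T_j and T_i is uncensored (delta_i = 1).
            if T[i] < T[j] and delta[i] == 1:
                admissible += 1
                correct += scores[i] > scores[j]   # model predicts i dies first
    return correct / admissible

T = np.array([2.0, 5.0, 3.0]); delta = np.array([1, 0, 1]); risk = np.array([0.9, 0.1, 0.5])
print(c_index(T, delta, risk))   # 1.0: all admissible pairs correctly ordered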
We run experiments on the Veteran data, available in the R package survival [19]. Veteran
consists of a randomized trial of two treatment regimes for lung cancer. It has 137 samples and
5 covariates: the treatment, indicating the type of treatment of the patients; their age; the Karnofsky
performance score; an indicator for prior treatment; and the months from diagnosis. It contains 9
censored times, corresponding to right censoring.

In the experiment we run our Weibull model (W-SGP) and Exponential model (E-SGP), ANOVA-DDP,
Cox proportional hazards and random survival forests. We perform 10-fold cross validation
and compute the C-index for each fold. Figure 2 reports the results.

For this dataset the only significant variable corresponds to the Karnofsky performance score. In
particular, as the values of this covariate increase, we expect an improved survival time. All the
studied models achieve such behaviour and suggest a proportional relation between the hazards.
This is observable in the C-index boxplot, where the models assuming proportional hazard rates obtain good results.
[Figure 2: Left: C-index for ANOVA-DDP, COX, E-SGP, RSF and W-SGP. Middle: survival curves
obtained for the combinations of scores 30 and 90 with treatments 1 (standard) and 2 (test). Right:
survival curves, using W-SGP, across all scores for fixed treatment 1, diagnosis time 5 months, age 38
and no prior therapy. (Best viewed in colour.)]
[Figure 3: Survival curves across all scores for fixed treatment 1, diagnosis time 5 months, age 38 and
no prior therapy. Left: ANOVA-DDP; middle: Cox proportional hazards; right: random survival forests.]
Nevertheless, our method detects some differences between the treatments when the Karnofsky
performance score is 90, as can be seen in figure 2.

For the other competing models we observe overall good results. ANOVA-DDP attains the lowest
C-index; in figure 3 we see that it seems to overestimate the survival function for lower scores.
Arguably, our survival curves are also more visually pleasing than those of Cox proportional
hazards and random survival forests.
6 Discussion

We introduced a Bayesian semiparametric model for survival analysis. Our model is able to deal with
censoring and covariates. It can incorporate a parametric part, through which an expert can encode his
knowledge via the baseline hazard, while at the same time the nonparametric part allows the model to
be flexible. Future work consists in creating a method to choose initial parameters, to avoid sensitivity
problems at the beginning. The construction of kernels that can be interpreted by an expert is also
desirable. Finally, even though the random features approximation is a good approach and
helped us run our algorithm on large datasets, it is still not sufficient for datasets with a massive
number of covariates, especially if we consider a large number of interactions between covariates.
Acknowledgments
YWT's research leading to these results has received funding from the European Research Council
under the European Union's Seventh Framework Programme (FP7/2007-2013), ERC grant agreement
no. 617071. Tamara Fernández and Nicolás Rivera were supported by funding from Becas CHILE.
References
[1] Ryan Prescott Adams, Iain Murray, and David JC MacKay. Tractable nonparametric Bayesian
inference in Poisson processes with Gaussian process intensities. In Proceedings of the 26th
Annual International Conference on Machine Learning, pages 9–16. ACM, 2009.
[2] James E Barrett and Anthony CC Coolen. Gaussian process regression for survival data with
competing risks. arXiv preprint arXiv:1312.1591, 2013.
[3] DR Cox. Regression models and life-tables. Journal of the Royal Statistical Society. Series B
(Methodological), 34(2):187–220, 1972.
[4] Maria De Iorio, Wesley O Johnson, Peter Müller, and Gary L Rosner. Bayesian nonparametric
nonproportional hazards survival modeling. Biometrics, 65(3):762–771, 2009.
[5] Kjell Doksum. Tailfree and neutral random probabilities and their posterior distributions. The
Annals of Probability, pages 183–201, 1974.
[6] David K Duvenaud, Hannes Nickisch, and Carl E Rasmussen. Additive Gaussian processes. In
Advances in Neural Information Processing Systems, pages 226–234, 2011.
[7] RL Dykstra and Purushottam Laud. A Bayesian nonparametric approach to reliability. The
Annals of Statistics, pages 356–367, 1981.
[8] Thomas S Ferguson. A Bayesian analysis of some nonparametric problems. The Annals of
Statistics, pages 209–230, 1973.
[9] Nils Lid Hjort, Chris Holmes, Peter Müller, and Stephen G Walker. Bayesian nonparametrics,
volume 28. Cambridge University Press, 2010.
[10] Hemant Ishwaran, Udaya B Kogalur, Eugene H Blackstone, and Michael S Lauer. Random
survival forests. The Annals of Applied Statistics, pages 841–860, 2008.
[11] Heikki Joensuu, Peter Reichardt, Mikael Eriksson, Kirsten Sundby Hall, and Aki Vehtari.
Gastrointestinal stromal tumor: a method for optimizing the timing of CT scans in the follow-up
of cancer patients. Radiology, 271(1):96–106, 2013.
[12] Heikki Joensuu, Aki Vehtari, Jaakko Riihimäki, Toshirou Nishida, Sonja E Steigen, Peter
Brabec, Lukas Plank, Bengt Nilsson, Claudia Cirilli, Chiara Braconi, et al. Risk of recurrence
of gastrointestinal stromal tumour after surgery: an analysis of pooled population-based cohorts.
The Lancet Oncology, 13(3):265–274, 2012.
[13] Edward L Kaplan and Paul Meier. Nonparametric estimation from incomplete observations.
Journal of the American Statistical Association, 53(282):457–481, 1958.
[14] Sara Martino, Rupali Akerkar, and Håvard Rue. Approximate Bayesian inference for survival
models. Scandinavian Journal of Statistics, 38(3):514–528, 2011.
[15] Iain Murray and Ryan P Adams. Slice sampling covariance hyperparameters of latent Gaussian
models. In Advances in Neural Information Processing Systems, pages 1732–1740, 2010.
[16] Iain Murray, Ryan Prescott Adams, and David JC MacKay. Elliptical slice sampling. In
AISTATS, volume 13, pages 541–548, 2010.
[17] Ali Rahimi and Benjamin Recht. Random features for large-scale kernel machines. In Advances
in Neural Information Processing Systems, pages 1177–1184, 2007.
[18] Vinayak Rao and Yee W. Teh. Gaussian process modulated renewal processes. In Advances in
Neural Information Processing Systems, pages 2474–2482, 2011.
[19] Terry M Therneau and Thomas Lumley. Package 'survival', 2015.
[20] Stephen Walker and Pietro Muliere. Beta-Stacy processes and a generalization of the Pólya-urn
scheme. The Annals of Statistics, pages 1762–1780, 1997.
6,018 | 6,444 | Variational Information Maximization for Feature Selection
Shuyang Gao
Greg Ver Steeg
Aram Galstyan
University of Southern California, Information Sciences Institute
gaos@usc.edu, gregv@isi.edu, galstyan@isi.edu
Abstract
Feature selection is one of the most fundamental problems in machine learning.
An extensive body of work on information-theoretic feature selection exists which
is based on maximizing mutual information between subsets of features and class
labels. Practical methods are forced to rely on approximations due to the difficulty
of estimating mutual information. We demonstrate that approximations made by
existing methods are based on unrealistic assumptions. We formulate a more flexible and general class of assumptions based on variational distributions and use
them to tractably generate lower bounds for mutual information. These bounds
define a novel information-theoretic framework for feature selection, which we
prove to be optimal under tree graphical models with proper choice of variational
distributions. Our experiments demonstrate that the proposed method strongly
outperforms existing information-theoretic feature selection approaches.
1 Introduction

Feature selection is one of the fundamental problems in machine learning research [1, 2]. Many
problems involve a large number of features that are either irrelevant or redundant for the task at
hand. In these cases, it is often advantageous to pick a smaller subset of features to avoid over-fitting,
Feature selection approaches are usually categorized into three groups: wrapper, embedded and
filter [3, 4, 5]. The first two methods, wrapper and embedded, are considered classifier-dependent,
i.e., the selection of features somehow depends on the classifier being used. Filter methods, on the
other hand, are classifier-independent and define a scoring function between features and labels in
the selection process.
Because filter methods may be employed in conjunction with a wide variety of classifiers, it is important that the scoring function of these methods is as general as possible. Since mutual information
(MI) is a general measure of dependence with several unique properties [6], many MI-based scoring
functions have been proposed as filter methods [7, 8, 9, 10, 11, 12]; see [5] for an exhaustive list.
Owing to the difficulty of estimating mutual information in high dimensions, most existing MI-based
feature selection methods are based on various low-order approximations for mutual information.
While those approximations have been successful in certain applications, they are heuristic in nature
and lack theoretical guarantees. In fact, as we demonstrate in Sec. 2.2, a large family of approximate
methods are based on two assumptions that are mutually inconsistent.
To address the above shortcomings, in this paper we introduce a novel feature selection method
based on a variational lower bound on mutual information; a similar bound was previously studied
within the Infomax learning framework [13]. We show that instead of maximizing the mutual information, which is intractable in high dimensions (hence the introduction of many heuristics), we can
maximize a lower bound on the MI with the proper choice of tractable variational distributions. We
use this lower bound to define an objective function and derive a forward feature selection algorithm.
We provide a rigorous proof that the forward feature selection is optimal under tree graphical models
by choosing an appropriate variational distribution. This is in contrast with previous informationtheoretic feature selection methods which lack any performance guarantees. We also conduct empirical validation on various datasets and demonstrate that the proposed approach outperforms stateof-the-art information-theoretic feature selection methods.
In Sec. 2 we introduce general MI-based feature selection methods and discuss their limitations.
Sec. 3 introduces the variational lower bound on mutual information and proposes two specific variational distributions. In Sec. 4, we report results from our experiments, and compare the proposed
approach with existing methods.
2 Information-Theoretic Feature Selection Background

2.1 Mutual Information-Based Feature Selection

Consider a supervised learning scenario where x = {x₁, x₂, ..., x_D} is a D-dimensional input feature vector, and y is the output label. In filter methods, the mutual information-based feature selection task is to select T features x_{S*} = {x_{f₁}, x_{f₂}, ..., x_{f_T}} such that the mutual information between
x_{S*} and y is maximized. Formally,

S* = arg max_S I(x_S : y)  s.t. |S| = T    (1)

where I(·) denotes the mutual information [6].
Forward Sequential Feature Selection. Maximizing the objective function in Eq. 1 is generally
NP-hard. Many MI-based feature selection methods adopt a greedy method, where features are
selected incrementally, one feature at a time. Let S^{t−1} = {x_{f₁}, x_{f₂}, ..., x_{f_{t−1}}} be the selected
feature set after time step t−1. According to the greedy method, the next feature f_t at step t is
selected such that

f_t = arg max_{i ∉ S^{t−1}} I(x_{S^{t−1} ∪ i} : y)    (2)
where x_{S^{t−1} ∪ i} denotes x's projection onto the feature space S^{t−1} ∪ i. As shown in [5], the mutual
information term in Eq. 2 can be decomposed as:

I(x_{S^{t−1} ∪ i} : y) = I(x_{S^{t−1}} : y) + I(x_i : y | x_{S^{t−1}})
= I(x_{S^{t−1}} : y) + I(x_i : y) − I(x_i : x_{S^{t−1}}) + I(x_i : x_{S^{t−1}} | y)
= I(x_{S^{t−1}} : y) + I(x_i : y) − (H(x_{S^{t−1}}) − H(x_{S^{t−1}} | x_i)) + (H(x_{S^{t−1}} | y) − H(x_{S^{t−1}} | x_i, y))    (3)

where H(·) denotes the entropy [6]. Omitting the terms that do not depend on x_i in Eq. 3, we can
rewrite Eq. 2 as follows:

f_t = arg max_{i ∉ S^{t−1}} I(x_i : y) + H(x_{S^{t−1}} | x_i) − H(x_{S^{t−1}} | x_i, y)    (4)
The greedy learning algorithm has been analyzed in [14].
2.2 Limitations of Previous MI-Based Feature Selection Methods

Estimating high-dimensional information-theoretic quantities is a difficult task. Therefore,
most MI-based feature selection methods propose low-order approximations to H(x_{S^{t−1}} | x_i) and
H(x_{S^{t−1}} | x_i, y) in Eq. 4. A general family of methods relies on the following approximations [5]:

H(x_{S^{t−1}} | x_i) ≈ Σ_{k=1}^{t−1} H(x_{f_k} | x_i)

H(x_{S^{t−1}} | x_i, y) ≈ Σ_{k=1}^{t−1} H(x_{f_k} | x_i, y)    (5)
The approximations in Eq. 5 become exact under the following two assumptions [5]:

Assumption 1. (Feature Independence Assumption) p(x_{S^{t−1}} | x_i) = ∏_{k=1}^{t−1} p(x_{f_k} | x_i)

Assumption 2. (Class-Conditioned Independence Assumption) p(x_{S^{t−1}} | x_i, y) = ∏_{k=1}^{t−1} p(x_{f_k} | x_i, y)

Assumption 1 and Assumption 2 mean that the selected features are independent and class-conditionally independent, respectively, given the unselected feature x_i under consideration.
[Figure 1: The first two graphical models show the assumptions of traditional MI-based feature selection
methods (left: Assumption 1; middle: Assumption 2). The third graphical model shows a scenario in which
both Assumption 1 and Assumption 2 are true. A dashed line indicates there may or may not be a correlation
between two variables.]
We now demonstrate that the two assumptions cannot be valid simultaneously unless the data has
a very specific (and unrealistic) structure. Indeed, consider the graphical models consistent with
either assumption, as illustrated in Fig. 1. If Assumption 1 holds true, then x_i is the only common
cause of the previously selected features S^{t−1} = {x_{f₁}, x_{f₂}, ..., x_{f_{t−1}}}, so that those features become
independent when conditioned on x_i. On the other hand, if Assumption 2 holds, then the features
depend both on x_i and the class label y; therefore, generally speaking, the distribution over those features
does not factorize by solely conditioning on x_i: there will be remnant dependencies due to y. Thus,
if Assumption 2 is true, then Assumption 1 cannot be true in general, unless the data is generated
according to a very specific model shown in the rightmost model in Fig. 1. Note, however, that in
this case, x_i becomes the most important feature because I(x_i : y) > I(x_{S^{t−1}} : y); then we should
have selected x_i at the very first step, contradicting the feature selection process.
As we mentioned above, most existing methods implicitly or explicitly adopt both assumptions or
their stronger versions, as shown in [5]; these include mutual information maximization (MIM) [15],
joint mutual information (JMI) [8], conditional mutual information maximization (CMIM) [9],
maximum relevance minimum redundancy (mRMR) [10], conditional Infomax feature extraction (CIFE) [16], etc. Approaches based on global optimization of mutual information, such as
quadratic programming feature selection (QPFS) [11] and the state-of-the-art conditional mutual
information-based spectral method (SPEC_CMI) [12], are derived from the previous greedy methods
and therefore also implicitly rely on those two assumptions.
In the next section we address these issues by introducing a novel information-theoretic framework
for feature selection. Instead of estimating mutual information and making mutually inconsistent
assumptions, our framework formulates a tractable variational lower bound on mutual information,
which allows a more flexible and general class of assumptions via appropriate choices of variational
distributions.
3 Method

3.1 Variational Mutual Information Lower Bound

Let p(x, y) be the joint distribution of input (x) and output (y) variables. Barber & Agakov [13]
derived the following lower bound for the mutual information I(x : y) by using the non-negativity of
the KL-divergence: Σ_x p(x|y) log (p(x|y)/q(x|y)) ≥ 0 gives

I(x : y) ≥ H(x) + ⟨ln q(x|y)⟩_{p(x,y)}    (6)

where angled brackets represent averages and q(x|y) is an arbitrary variational distribution. This
bound becomes exact if q(x|y) ≡ p(x|y).
It is worthwhile to note that in the context of unsupervised representation learning, p(y|x) and
q(x|y) can be viewed as an encoder and a decoder, respectively. In this case, y needs to be learned
by maximizing the lower bound in Eq. 6 by iteratively adjusting the parameters of the encoder and
decoder, as in [13, 17].

3.2 Variational Information Maximization for Feature Selection

Naturally, in terms of information-theoretic feature selection, we could also try to optimize the
variational lower bound in Eq. 6 by choosing a subset of features S* in x, such that

S* = arg max_S { H(x_S) + ⟨ln q(x_S | y)⟩_{p(x_S,y)} }    (7)
However, the H(x_S) term on the RHS of Eq. 7 is still intractable when x_S is very high-dimensional.
Nonetheless, noticing that the variable y is the class label, which is usually discrete, so that H(y)
is fixed and tractable, by symmetry we switch x and y in Eq. 6 and rewrite the lower bound as
follows:

I(x : y) ≥ H(y) + ⟨ln q(y|x)⟩_{p(x,y)} = ⟨ln (q(y|x)/p(y))⟩_{p(x,y)}    (8)

The equality in Eq. 8 is obtained by noticing that H(y) = ⟨−ln p(y)⟩_{p(y)}.
By using Eq. 8, the lower bound optimal subset S* of x becomes:

S* = arg max_S ⟨ln (q(y|x_S)/p(y))⟩_{p(x_S,y)}    (9)

3.2.1 Choice of Variational Distribution
q(y|x_S) in Eq. 9 can be any distribution as long as it is normalized. We need to choose q(y|x_S) to
be as general as possible while still keeping the term ⟨ln q(y|x_S)⟩_{p(x_S,y)} tractable in Eq. 9.

As a result, we set q(y|x_S) as

q(y|x_S) = q(x_S, y)/q(x_S) = q(x_S|y) p(y) / Σ_{y′} q(x_S|y′) p(y′)    (10)
We can verify that Eq. 10 is normalized even if q(xS |y) is not normalized.
If we further denote

q(x_S) = Σ_{y′} q(x_S|y′) p(y′)    (11)

then by combining Eqs. 9 and 10, we get

I(x_S : y) ≥ ⟨ln (q(x_S|y)/q(x_S))⟩_{p(x_S,y)} ≡ I_LB(x_S : y)    (12)

We also have the following equation, which shows the gap between I(x_S : y) and I_LB(x_S : y):

I(x_S : y) − I_LB(x_S : y) = ⟨KL(p(y|x_S) || q(y|x_S))⟩_{p(x_S)}    (13)
Auto-Regressive Decomposition. Now that q(y|x_S) is defined, all we need to do is model
q(x_S|y) under Eq. 10, and q(x_S) is easy to compute based on q(x_S|y). Here we decompose
q(x_S|y) as an auto-regressive distribution assuming T features in S:

q(x_S|y) = q(x_{f₁}|y) ∏_{t=2}^T q(x_{f_t} | x_{f_{<t}}, y)    (14)

[Figure 2: Auto-regressive decomposition for q(x_S|y).]

where x_{f_{<t}} denotes {x_{f₁}, x_{f₂}, ..., x_{f_{t−1}}}. The graphical model in Fig. 2 illustrates this decomposition. The main advantage of this model is that it is well-suited for the forward feature selection
procedure, where one feature is selected at a time (which we will explain in Sec. 3.2.3). And if
q(x_{f_t} | x_{f_{<t}}, y) is tractable, then so is the whole distribution q(x_S|y). Therefore, we seek
tractable Q-distributions for q(x_{f_t} | x_{f_{<t}}, y). Below we illustrate two such Q-distributions.
Naive Bayes Q-distribution. A natural idea would be to assume x_t is independent of the other
variables given y, i.e.,

q(x_{f_t} | x_{f_{<t}}, y) = p(x_{f_t} | y)    (15)

Then the variational distribution q(y|x_S) can be written, based on Eqs. 10 and 15, as follows:

q(y|x_S) = p(y) ∏_{j∈S} p(x_j|y) / Σ_{y′} p(y′) ∏_{j∈S} p(x_j|y′)    (16)
And we also have the following theorem:

Theorem 3.1 (Exact Naive Bayes). Under Eq. 16, the lower bound in Eq. 8 becomes exact if and
only if the data is generated by a Naive Bayes model, i.e., p(x, y) = p(y) ∏_i p(x_i|y).

The proof of Theorem 3.1 follows directly from the definition of mutual information. Note that
the most-cited MI-based feature selection method, mRMR [10], also assumes conditional independence
given the class label y, as shown in [5, 18, 19], but it makes additional, stronger independence
assumptions among the feature variables.
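For concreteness, here is a sketch of the variational distribution of Eq. 16 for discrete data, using empirical plug-in estimates of p(y) and p(x_j | y); this is our illustration, not the authors' implementation, and the smoothing constant eps is an assumption we add to avoid log(0):

import numpy as np

def naive_bayes_posterior(X_S, y, x_query, eps=1e-9):
    """q(y|x_S) from Eq. 16, with empirical plug-in estimates of p(y) and p(x_j|y)."""
    classes = np.unique(y)
    log_post = np.zeros(len(classes))
    for ci, c in enumerate(classes):
        mask = (y == c)
        log_post[ci] = np.log(mask.mean())                 # log p(y = c)
        for j in range(X_S.shape[1]):                      # product over the selected features
            p_xj = (X_S[mask, j] == x_query[j]).mean()     # empirical p(x_j = x_query[j] | y = c)
            log_post[ci] += np.log(p_xj + eps)             # eps is our smoothing choice
    post = np.exp(log_post - log_post.max())
    return classes, post / post.sum()                      # normalization over y', as in Eq. 10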
Pairwise Q-distribution. We now consider an alternative approach that is more general than the
Naive Bayes distribution:

q(x_{f_t} | x_{f_{<t}}, y) = ( ∏_{i=1}^{t−1} p(x_{f_t} | x_{f_i}, y) )^{1/(t−1)}    (17)

In Eq. 17, we assume q(x_{f_t} | x_{f_{<t}}, y) to be the geometric mean of the conditional distributions
q(x_{f_t} | x_{f_i}, y). This assumption is tractable as well as reasonable: if the data is generated by a
Naive Bayes model, the lower bound in Eq. 8 also becomes exact using Eq. 17, since
p(x_{f_t} | x_{f_i}, y) ≡ p(x_{f_t} | y) in that case.
3.2.2 Estimating Lower Bound From Data

Assuming either the Naive Bayes Q-distribution or the pairwise Q-distribution, it is convenient to estimate
q(x_S|y) and q(x_S) in Eq. 12 by using plug-in probability estimators for discrete data or one/two-dimensional density estimators for continuous data. We also use the sample mean to approximate
the expectation term in Eq. 12. Our final estimator for I_LB(x_S : y) is written as follows:

Î_LB(x_S : y) = (1/N) Σ_{k=1}^N ln ( q̂(x_S^{(k)} | y^{(k)}) / q̂(x_S^{(k)}) )    (18)

where (x^{(k)}, y^{(k)}) are samples from the data, and q̂(·) denotes the estimate of q(·).
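A sketch of the estimator in Eq. 18 under the Naive Bayes Q-distribution for discrete data; it plugs in empirical frequencies for q̂ and averages over the samples, and is meant only to show the computation (the smoothing eps is ours):

import numpy as np

def I_LB_naive(X_S, y, eps=1e-9):
    """Estimate I_LB(x_S : y) = (1/N) sum_k ln( q(x_S^(k)|y^(k)) / q(x_S^(k)) ), Eq. 18."""
    N = len(y)
    classes, counts = np.unique(y, return_counts=True)
    priors = counts / N                                              # p(y)
    # log q(x_S^(k) | y = c) under the Naive Bayes Q-distribution, per sample k and class c.
    log_q = np.zeros((N, len(classes)))
    for ci, c in enumerate(classes):
        for j in range(X_S.shape[1]):
            vals, cnt = np.unique(X_S[y == c, j], return_counts=True)
            p = dict(zip(vals, cnt / cnt.sum()))                     # empirical p(x_j | y = c)
            log_q[:, ci] += np.log(np.array([p.get(v, 0.0) for v in X_S[:, j]]) + eps)
    log_q_xy = log_q[np.arange(N), np.searchsorted(classes, y)]      # q(x_S^(k) | y^(k))
    log_q_x = np.log(np.exp(log_q) @ priors + eps)                   # q(x_S^(k)) = sum_y' q(x|y') p(y')
    return np.mean(log_q_xy - log_q_x)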
3.2.3 Variational Forward Feature Selection Under Auto-Regressive Decomposition

After defining q(y|x_S) in Eq. 10 and the auto-regressive decomposition of q(x_S|y) in Eq. 14, we are
able to do the forward feature selection previously described in Eq. 2, but replacing the mutual information with its lower bound Î_LB. Recall that S^{t−1} is the set of selected features after step t−1;
then the feature f_t will be selected at step t such that

f_t = arg max_{i ∉ S^{t−1}} Î_LB(x_{S^{t−1} ∪ i} : y)    (19)

where Î_LB(x_{S^{t−1} ∪ i} : y) can be obtained from Î_LB(x_{S^{t−1}} : y) recursively through the auto-regressive decomposition q(x_{S^{t−1} ∪ i}|y) = q(x_{S^{t−1}}|y) q(x_i | x_{S^{t−1}}, y), where q(x_{S^{t−1}}|y) is stored at step t−1.
This forward feature selection can be done under the auto-regressive decomposition in Eqs. 10 and 14
for any Q-distribution. However, calculating q(x_i | x_{S^t}, y) may vary according to different Q-distributions. We can verify that it is easy to get q(x_i | x_{S^t}, y) recursively from q(x_i | x_{S^{t−1}}, y) under
the Naive Bayes or pairwise Q-distribution. We call our algorithm under these two Q-distributions
VMI_naive and VMI_pairwise respectively.

It is worth noting that the lower bound does not always increase at each step. A decrease in the
lower bound at step t indicates that the Q-distribution would approximate the underlying distribution worse than it did at the previous step t−1. In this case, the algorithm re-maximizes the
lower bound from zero with only the remaining unselected features. We summarize the concrete
implementation of our algorithms in supplementary Sec. A.
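The forward step of Eq. 19 then reduces to the following loop (a sketch reusing the I_LB_naive estimator above; for clarity it re-estimates the bound from scratch at every step, whereas the recursive updates described in the text are what give the O(NDT) complexity):

import numpy as np  # uses I_LB_naive from the sketch above

def vmi_naive_forward(X, y, T):
    """Greedy forward selection maximizing the estimated variational lower bound."""
    selected = []
    for _ in range(T):
        remaining = [i for i in range(X.shape[1]) if i not in selected]
        # Score each candidate feature by the bound on the enlarged feature set.
        scores = [I_LB_naive(X[:, selected + [i]], y) for i in remaining]
        selected.append(remaining[int(np.argmax(scores))])
    return selected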
Time Complexity. Although our algorithm needs to calculate the distributions at each step,
we only need to calculate the probability values at the sample points. For both VMI_naive and
VMI_pairwise, the total computational complexity is O(NDT), with N the number of samples,
D the total number of features, and T the number of finally selected features. The detailed time analysis is
left to supplementary Sec. A. As shown in Table 1, our methods VMI_naive and VMI_pairwise
have the same time complexity as mRMR [10], while the state-of-the-art global optimization method
SPEC_CMI [12] must precompute the pairwise mutual information matrix, which gives a
time complexity of O(ND²).

Table 1: Time complexity in the number of features D, the number of selected features T, and the
number of samples N.

Method     | mRMR   | VMI_naive | VMI_pairwise | SPEC_CMI
Complexity | O(NDT) | O(NDT)    | O(NDT)       | O(ND²)
Optimality Under Tree Graphical Models. Although our method VMI_naive assumes a Naive
Bayes model, we can prove that this method is still optimal if the data is generated according to
tree graphical models. Indeed, both of our methods, VMI_naive and VMI_pairwise, will always
prioritize the first-layer features, as shown in Fig. 3. This optimality is summarized in Theorem B.1
in supplementary Sec. B.
4 Experiments

Synthetic Data. We begin with experiments on a synthetic model generated according to the tree
structure illustrated in the left part of Fig. 3. The detailed data-generating process is described in
supplementary Sec. D. The root node Y is a binary variable, while the other variables are continuous.
We use VMI_naive to optimize the lower bound I_LB(x : y). 5000 samples are used to generate the
synthetic data, and the variational Q-distributions are estimated by a kernel density estimator. We
can see from the plot in the right-hand part of Fig. 3 that our algorithm, VMI_naive, selects x₁, x₂,
x₃ as the first three features, although x₂ and x₃ are only weakly correlated with y. If we continue
to add deeper-level features {x₄, ..., x₉}, the lower bound decreases. For comparison, we also
list the mutual information between each single feature x_i and y in Table 2. We can see from
Table 2 that the maximum relevance criterion [15] would instead choose x₁, x₄ and x₅ as the top three
features.
Figure 3: (Left) The generative model used for the synthetic experiments. Edge thickness represents the
relationship strength. (Right) Optimizing the lower bound with VMI_naive. Variables under
the blue line denote the features selected at each step. The dotted blue line shows the decreasing lower
bound if more features are added. Ground-truth mutual information is obtained using N = 100,000
samples.
Feature x_i | x₁    | x₂    | x₃    | x₄    | x₅    | x₆    | x₇    | x₈    | x₉
I(x_i : y)  | 0.111 | 0.052 | 0.022 | 0.058 | 0.058 | 0.025 | 0.029 | 0.012 | 0.013

Table 2: Mutual information between the label y and each feature x_i for Fig. 3. I(x_i : y) is estimated
using N = 100,000 samples. The top three variables with the highest mutual information are x₁, x₄
and x₅.
Real-World Data. We compare our algorithms VMI_naive and VMI_pairwise with other popular information-theoretic feature selection methods, including mRMR [10], JMI [8], MIM [15],
CMIM [9], CIFE [16], and SPEC_CMI [12]. We use 17 datasets that are well known in previous feature
selection studies [5, 12] (all data are discretized). The dataset summaries are given in supplementary Sec. C. We use the average cross-validation error rate over the range of 10 to 100 selected features to
compare the different algorithms, under the same setting as [12]. Tenfold cross-validation is employed
for datasets with number of samples N ≥ 100, and leave-one-out cross-validation otherwise. The
3-nearest-neighbor classifier is used for Gisette and Madelon, following [5]. For the remaining
datasets, the chosen classifier is a linear SVM, following [11, 12].

The experimental results can be seen in Table 3.¹ The entries with * and ** indicate the best and
the second-best performance, respectively (in terms of average error rate). We also use
the paired t-test at the 5% significance level to test the hypothesis that VMI_naive or VMI_pairwise
performs significantly better than the other methods, or vice versa. Overall, we find that both of our methods,
VMI_naive and VMI_pairwise, strongly outperform the other methods. This indicates that our variational
feature selection framework is a promising addition to the current literature on information-theoretic
feature selection.
Figure 4: Number of selected features versus average cross-validation error in datasets Semeion and
Gisette.
¹ We omit the results for MIM and CIFE due to space limitations. The complete results are shown in
supplementary Sec. C.
Table 3: Average cross-validation error rate comparison of VMI against other methods. The
last two lines indicate win (W) / tie (T) / loss (L) for VMI_naive and VMI_pairwise respectively.

Dataset     | mRMR       | JMI        | CMIM       | SPEC_CMI   | VMI_naive  | VMI_pairwise
Lung        | 10.9±4.7** | 11.6±4.7   | 11.4±3.0   | 11.6±5.6   | 7.4±3.6*   | 14.5±6.0
Colon       | 19.7±2.6   | 17.3±3.0   | 18.4±2.6   | 16.1±2.0   | 11.2±2.7*  | 11.9±1.7**
Leukemia    | 0.4±0.7    | 1.4±1.2    | 1.1±2.0    | 1.8±1.3    | 0.0±0.1*   | 0.2±0.5**
Lymphoma    | 5.6±2.8    | 6.6±2.2    | 8.6±3.3    | 12.0±6.6   | 3.7±1.9*   | 5.2±3.1**
Splice      | 13.6±0.4*  | 13.7±0.5** | 14.7±0.3   | 13.7±0.5** | 13.7±0.5** | 13.7±0.5**
Landsat     | 19.5±1.2   | 18.9±1.0   | 19.1±1.1   | 21.0±3.5   | 18.8±0.8*  | 18.8±1.0**
Waveform    | 15.9±0.5*  | 15.9±0.5*  | 16.0±0.7   | 15.9±0.6** | 15.9±0.6** | 15.9±0.5*
KrVsKp      | 5.1±0.7**  | 5.2±0.6    | 5.3±0.5    | 5.1±0.6*   | 5.3±0.5    | 5.1±0.7**
Ionosphere  | 12.8±0.9   | 16.6±1.6   | 13.1±0.8   | 16.8±1.6   | 12.7±1.9** | 12.0±1.0*
Semeion     | 23.4±6.5   | 24.8±7.6   | 16.3±4.4   | 26.0±9.3   | 14.0±4.0*  | 14.5±3.9**
Multifeat.  | 4.0±1.6    | 4.0±1.6    | 3.6±1.2    | 4.8±3.0    | 3.0±1.1*   | 3.5±1.1**
Optdigits   | 7.6±3.3    | 7.6±3.2    | 7.5±3.4**  | 9.2±6.0    | 7.2±2.5*   | 7.6±3.6
Musk2       | 12.4±0.7*  | 12.8±0.7   | 13.0±1.0   | 15.1±1.8   | 12.8±0.6   | 12.6±0.5**
Spambase    | 6.9±0.7    | 7.0±0.8    | 6.8±0.7**  | 9.0±2.3    | 6.6±0.3*   | 6.6±0.3*
Promoter    | 21.5±2.8   | 22.4±4.0   | 22.1±2.9   | 24.0±3.7   | 21.2±3.9** | 20.4±3.1*
Gisette     | 5.5±0.9    | 5.9±0.7    | 5.1±1.3    | 7.1±1.3    | 4.8±0.9**  | 4.2±0.8*
Madelon     | 30.8±3.8   | 15.3±2.6*  | 17.4±2.6   | 15.9±2.5** | 16.7±2.7   | 16.6±2.9
#W1/T1/L1   | 11/4/2     | 10/6/1     | 10/7/0     | 13/2/2     | -          | -
#W2/T2/L2   | 9/6/2      | 9/6/2      | 13/3/1     | 12/3/2     | -          | -
We also plot the average cross-validation error with respect to the number of selected features. Fig. 4
shows the two most distinguishable datasets, Semeion and Gisette. We can see that both of our
methods, VMI_naive and VMI_pairwise, achieve lower error rates on these two datasets.

5 Related Work

There has been a significant amount of work on information-theoretic feature selection in the past
twenty years: [5, 7, 8, 9, 10, 15, 11, 12, 20], to name a few. Most of these methods are based on
combinations of so-called relevant, redundant and complementary information. Such combinations,
representing low-order approximations of mutual information, are derived from the two assumptions
discussed above, and it has proved unrealistic to expect both assumptions to hold. Inspired by group testing [21],
more scalable feature selection methods have been developed, but those methods also require the
calculation of high-dimensional mutual information as a basic scoring function.
Estimating mutual information from data requires a large number of observations, especially when
the dimensionality is high. The proposed variational lower bound can be viewed as a way of estimating the mutual information between a high-dimensional continuous variable and a discrete variable.
Only a few examples exist in the literature [22] under this setting. We hope our method will shed light
on new ways to estimate mutual information, similar to estimating divergences in [23].

6 Conclusion
Feature selection has been a significant endeavor over the past decade. Mutual information gives
a general basis for quantifying the informativeness of features. Despite the clarity of mutual information, estimating it can be difficult. While a large number of information-theoretic methods
exist, they are rather limited and rely on mutually inconsistent assumptions about the underlying data
distributions. We introduced a unifying variational mutual information lower bound to address these
issues and showed that, by auto-regressive decomposition, feature selection can be done in a forward
manner by progressively maximizing the lower bound. We also presented two concrete methods
using Naive Bayes and pairwise Q-distributions, which strongly outperform the existing methods.
VMI_naive only assumes a Naive Bayes model, but even this simple model outperforms the existing
information-theoretic methods, indicating the effectiveness of our variational information maximization framework. We hope that our framework will inspire new mathematically rigorous algorithms
for information-theoretic feature selection, such as optimizing the variational lower bound globally
and developing more powerful variational approaches for capturing complex dependencies.
References
[1] Manoranjan Dash and Huan Liu. Feature selection for classification. Intelligent Data Analysis, 1(3):131–156, 1997.
[2] Huan Liu and Hiroshi Motoda. Feature Selection for Knowledge Discovery and Data Mining, volume 454. Springer Science & Business Media, 2012.
[3] Ron Kohavi and George H John. Wrappers for feature subset selection. Artificial Intelligence, 97(1):273–324, 1997.
[4] Isabelle Guyon and André Elisseeff. An introduction to variable and feature selection. The Journal of Machine Learning Research, 3:1157–1182, 2003.
[5] Gavin Brown, Adam Pocock, Ming-Jie Zhao, and Mikel Luján. Conditional likelihood maximisation: a unifying framework for information theoretic feature selection. The Journal of Machine Learning Research, 13(1):27–66, 2012.
[6] Thomas M Cover and Joy A Thomas. Elements of Information Theory. John Wiley & Sons, 2012.
[7] Roberto Battiti. Using mutual information for selecting features in supervised neural net learning. Neural Networks, IEEE Transactions on, 5(4):537–550, 1994.
[8] Howard Hua Yang and John E Moody. Data visualization and feature selection: New algorithms for nongaussian data. In NIPS, volume 99, pages 687–693. Citeseer, 1999.
[9] François Fleuret. Fast binary feature selection with conditional mutual information. The Journal of Machine Learning Research, 5:1531–1555, 2004.
[10] Hanchuan Peng, Fuhui Long, and Chris Ding. Feature selection based on mutual information criteria of max-dependency, max-relevance, and min-redundancy. Pattern Analysis and Machine Intelligence, IEEE Transactions on, 27(8):1226–1238, 2005.
[11] Irene Rodriguez-Lujan, Ramon Huerta, Charles Elkan, and Carlos Santa Cruz. Quadratic programming feature selection. The Journal of Machine Learning Research, 11:1491–1516, 2010.
[12] Xuan Vinh Nguyen, Jeffrey Chan, Simone Romano, and James Bailey. Effective global approaches for mutual information based feature selection. In Proceedings of the 20th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pages 512–521. ACM, 2014.
[13] David Barber and Felix Agakov. The IM algorithm: a variational approach to information maximization. In Advances in Neural Information Processing Systems 16: Proceedings of the 2003 Conference, volume 16, page 201. MIT Press, 2004.
[14] Abhimanyu Das and David Kempe. Submodular meets spectral: Greedy algorithms for subset selection, sparse approximation and dictionary selection. In Proceedings of the 28th International Conference on Machine Learning (ICML-11), pages 1057–1064, 2011.
[15] David D Lewis. Feature selection and feature extraction for text categorization. In Proceedings of the Workshop on Speech and Natural Language, pages 212–217. Association for Computational Linguistics, 1992.
[16] Dahua Lin and Xiaoou Tang. Conditional infomax learning: an integrated framework for feature extraction and fusion. In Computer Vision - ECCV 2006, pages 68–82. Springer, 2006.
[17] Shakir Mohamed and Danilo Jimenez Rezende. Variational information maximisation for intrinsically motivated reinforcement learning. In Advances in Neural Information Processing Systems, pages 2116–2124, 2015.
[18] Kiran S Balagani and Vir V Phoha. On the feature selection criterion based on an approximation of multidimensional mutual information. IEEE Transactions on Pattern Analysis & Machine Intelligence, (7):1342–1343, 2010.
[19] Nguyen Xuan Vinh, Shuo Zhou, Jeffrey Chan, and James Bailey. Can high-order dependencies improve mutual information based feature selection? Pattern Recognition, 2015.
[20] Hongrong Cheng, Zhiguang Qin, Chaosheng Feng, Yong Wang, and Fagen Li. Conditional mutual information-based feature selection analyzing for synergy and redundancy. ETRI Journal, 33(2):210–218, 2011.
[21] Yingbo Zhou, Utkarsh Porwal, Ce Zhang, Hung Q Ngo, Long Nguyen, Christopher Ré, and Venu Govindaraju. Parallel feature selection inspired by group testing. In Advances in Neural Information Processing Systems, pages 3554–3562, 2014.
[22] Brian C Ross. Mutual information between discrete and continuous data sets. PLoS ONE, 9(2):e87357, 2014.
[23] XuanLong Nguyen, Martin J Wainwright, and Michael I Jordan. Estimating divergence functionals and the likelihood ratio by convex risk minimization. Information Theory, IEEE Transactions on, 56(11):5847–5861, 2010.
[24] Shuyang Gao. Variational feature selection code. http://github.com/BiuBiuBiLL/InfoFeatureSelection.
[25] Chris Ding and Hanchuan Peng. Minimum redundancy feature selection from microarray gene expression data. Journal of Bioinformatics and Computational Biology, 3(02):185–205, 2005.
[26] Kevin Bache and Moshe Lichman. UCI machine learning repository, 2013.
6,019 | 6,445 | Fast Algorithms for Robust PCA via Gradient Descent
Xinyang Yi†  Dohyung Park†  Yudong Chen‡  Constantine Caramanis†
† The University of Texas at Austin  ‡ Cornell University
{yixy,dhpark,constantine}@utexas.edu  yudong.chen@cornell.edu
Abstract
We consider the problem of Robust PCA in the fully and partially observed settings. Without corruptions, this is the well-known matrix completion problem.
From a statistical standpoint this problem has been recently well-studied, and
conditions on when recovery is possible (how many observations do we need,
how many corruptions can we tolerate) via polynomial-time algorithms is by
now understood. This paper presents and analyzes a non-convex optimization
approach that greatly reduces the computational complexity of the above problems, compared to the best available algorithms. In particular, in the fully observed case, with r denoting rank and d dimension, we reduce the complexity
from O(r²d² log(1/ε)) to O(rd² log(1/ε)), a big savings when the rank is big.
For the partially observed case, we show the complexity of our algorithm is no
more than O(r⁴d log d log(1/ε)). Not only is this the best-known run-time for a
provable algorithm under partial observation, but in the setting where r is small
compared to d, it also allows for near-linear-in-d run-time that can be exploited in
the fully-observed case as well, by simply running our algorithm on a subset of the
observations.
1
Introduction
Principal component analysis (PCA) aims to find a low rank subspace that best-approximates a data
matrix Y ∈ ℝ^{d1×d2}. The simple and standard method of PCA by singular value decomposition
(SVD) fails in many modern data problems due to missing and corrupted entries, as well as sheer scale
of the problem. Indeed, SVD is highly sensitive to outliers by virtue of the squared-error criterion
it minimizes. Moreover, its running time scales as O(rd²) to recover a rank r approximation of a
d-by-d matrix.
While there have been recent results developing provably robust algorithms for PCA (e.g., [5, 26]), the
running times range from O(r²d²) to O(d³) and hence are significantly worse than SVD. Meanwhile,
the literature developing sub-quadratic algorithms for PCA (e.g., [15, 14, 3]) seems unable to
guarantee robustness to outliers or missing data.
Our contribution lies precisely in this area: provably robust algorithms for PCA with improved
run-time. Specifically, we provide an efficient algorithm with running time that matches SVD while
nearly matching the best-known robustness guarantees. In the case where rank is small compared to
dimension, we develop an algorithm with running time that is nearly linear in the dimension. This
last algorithm works by subsampling the data, and therefore we also show that our algorithm solves
the Robust PCA problem with partial observations (a generalization of matrix completion and Robust
PCA).
1.1 The Model and Related Work
We consider the following setting for robust PCA. Suppose we are given a matrix Y ∈ ℝ^{d1×d2} that
has the decomposition Y = M* + S*, where M* is a rank r matrix and S* is a sparse corruption matrix
containing entries with arbitrary magnitude. The goal is to recover M* and S* from Y. To ease
notation, we let d1 = d2 = d in the remainder of this section.
Provable solutions for this model are first provided in the works of [9] and [5]. They propose to solve
this problem by convex relaxation:

min_{M,S} |||M|||_nuc + λ‖S‖₁,  s.t.  Y = M + S,  (1)

where |||M|||_nuc denotes the nuclear norm of M. Despite analyzing the same method, the corruption
models in [5] and [9] differ. In [5], the authors consider the setting where the entries of M* are
corrupted at random with probability α. They show their method succeeds in exact recovery with
α as large as 0.1, which indicates they can tolerate a constant fraction of corruptions. Work in [9]
considers a deterministic corruption model, where nonzero entries of S* can have arbitrary position,
but the sparsity of each row and column does not exceed αd. They prove that for exact recovery, it
can allow α = O(1/(μr√d)). This was subsequently further improved to α = O(1/(μr)), which is
in fact optimal [11, 18]. Here, μ represents the incoherence of M* (see Section 2 for details). In this
paper, we follow this latter line and focus on the deterministic corruption model.
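For reference, the relaxation (1) is straightforward to prototype with an off-the-shelf modeling package. The sketch below uses cvxpy purely for illustration; the package choice, function name, and the value of λ are our own, and this prototype scales far worse than the specialized solver [20] discussed next.

```python
import cvxpy as cp

def convex_rpca(Y, lam):
    """Prototype of the convex relaxation (1): nuclear norm plus an
    entrywise l1 penalty, subject to the exact decomposition constraint."""
    M = cp.Variable(Y.shape)
    S = cp.Variable(Y.shape)
    objective = cp.Minimize(cp.norm(M, "nuc") + lam * cp.sum(cp.abs(S)))
    problem = cp.Problem(objective, [M + S == Y])
    problem.solve()
    return M.value, S.value
```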
The state-of-the-art solver [20] for (1) has time complexity O(d³/ε) to achieve error ε, and is thus
much slower than SVD, and prohibitive for even modest values of d. Work in [21] considers the
deterministic corruption model, and improves this running time without sacrificing the robustness
guarantee on α. They propose an alternating projection (AltProj) method to estimate the low
rank and sparse structures iteratively and simultaneously, and show their algorithm has complexity
O(r²d² log(1/ε)), which is faster than the convex approach but still slower than SVD.
Non-convex approaches have recently seen numerous developments for applications in low-rank
estimation, including alternating minimization (see e.g. [19, 17, 16]) and gradient descent (see e.g.
[4, 12, 23, 24, 29, 30]). These works have fast running times, yet do not provide robustness guarantees.
One exception is [12], where the authors analyze a row-wise ℓ₁ projection method for recovering
S*. Their analysis hinges on positive semidefinite M*, and the algorithm requires prior knowledge
of the ℓ₁ norm of every row of S* and is thus prohibitive in practice. Another exception is work
[16], which analyzes alternating minimization plus an overall sparse projection. Their algorithm is
shown to tolerate at most a fraction of α = O(1/(μ^{2/3}r^{2/3}d)) corruptions. As we discuss in Section
1.2, we can allow S* to have much higher sparsity α = O(1/(μr^{1.5})), which is close to optimal.
It is worth mentioning other works that obtain provable guarantees for non-convex algorithms or
problems, including phase retrieval [6, 13, 28], EM algorithms [2, 25, 27], tensor decompositions [1]
and second order methods [22]. It might be interesting to bring robust considerations to these works.
1.2 Our Contributions
In this paper, we develop efficient non-convex algorithms for robust PCA. We propose a novel
algorithm based on the projected gradient method on the factorized space. We also extend it to solve
robust PCA in the setting with partial observations, i.e., in addition to gross corruptions, the data
matrix has a large number of missing values. Our main contributions are summarized as follows.¹
1. We propose a novel sparse estimator for the setting of deterministic corruptions. For the low-rank
structure to be identifiable, it is natural to assume that deterministic corruptions are "spread out" (no
more than some number in each row/column). We leverage this information in a simple but critical
algorithmic idea, that is tied to the ultimate complexity advantages our algorithm delivers.
2. Based on the proposed sparse estimator, we propose a projected gradient method on the matrix
factorized space. While non-convex, the algorithm is shown to enjoy linear convergence under proper
initialization. Along with a new initialization method, we show that robust PCA can be solved
within complexity O(rd² log(1/ε)) while ensuring robustness α = O(1/(μr^{1.5})). Our algorithm is
thus faster than the best previously known algorithm by a factor of r, and enjoys superior empirical
performance as well.
3. Algorithms for robust PCA with partial observations still rely on a computationally expensive
convex approach, as apparently this problem has evaded treatment by non-convex methods. We
consider precisely this problem. In a nutshell, we show that our gradient method succeeds (it is
guaranteed to produce the subspace of M*) even when run on no more than O(μ²r²d log d) random
entries of Y. The computational cost is O(μ³r⁴d log d log(1/ε)). When the rank r is small compared to
the dimension d, this dramatically improves on our bound above, as our cost becomes nearly
linear in d. We show, moreover, that this savings and robustness to erasures comes at no cost in the
robustness guarantee for the deterministic (gross) corruptions. While this demonstrates our algorithm
is robust to both outliers and erasures, it also provides a way to reduce computational costs even in
the fully observed setting, when r is small.
4. An immediate corollary of the above result provides a guarantee for exact matrix completion, with
general rectangular matrices, using O(μ²r²d log d) observed entries and O(μ³r⁴d log d log(1/ε))
time, thereby improving on existing results in [12, 23].
¹ To ease presentation, the discussion here assumes M* has constant condition number, whereas our results
below show the dependence on condition number explicitly.
Notation. For any index set Ω ⊆ [d1] × [d2], we let Ω_(i,·) := {(i, j) ∈ Ω | j ∈ [d2]} and Ω_(·,j) :=
{(i, j) ∈ Ω | i ∈ [d1]}. For any matrix A ∈ ℝ^{d1×d2}, we denote its projector onto support Ω by
Π_Ω(A), i.e., the (i, j)-th entry of Π_Ω(A) is equal to A_(i,j) if (i, j) ∈ Ω and zero otherwise. The i-th
row and j-th column of A are denoted by A_(i,·) and A_(·,j). The (i, j)-th entry is denoted as A_(i,j).
The operator norm of A is |||A|||_op. The Frobenius norm of A is |||A|||_F. The ℓ_a/ℓ_b norm of A is denoted by
|||A|||_{b,a}, i.e., the ℓ_a norm of the vector formed by the ℓ_b norm of every row. For instance, ‖A‖_{2,∞}
stands for max_{i∈[d1]} ‖A_(i,·)‖₂.
2 Problem Setup
We consider the problem where we observe a matrix Y ∈ ℝ^{d1×d2} that satisfies Y = M* + S*, where
M* has rank r, and S* is a corruption matrix with sparse support. Our goal is to recover M* and S*.
In the partially observed setting, in addition to sparse corruptions, we have erasures. We assume that
each entry of M* + S* is revealed independently with probability p ∈ (0, 1). In particular, for any
(i, j) ∈ [d1] × [d2], we consider the Bernoulli model where

Y_(i,j) = (M* + S*)_(i,j) with probability p, and Y_(i,j) = * otherwise.  (2)

We denote the support of Y by Ω = {(i, j) | Y_(i,j) ≠ *}. Note that we assume S* is not adaptive to
Ω. As is well understood thanks to work in matrix completion, this task is impossible in general:
we need to guarantee that M* is not both low-rank and sparse. To avoid such identifiability issues,
we make the following standard assumptions on M* and S*. (i) M* is not near-sparse or "spiky."
We impose this by requiring M* to be μ-incoherent: given a singular value decomposition (SVD)
M* = L*Σ*R*⊤, we assume that

‖L*‖_{2,∞} ≤ √(μr/d1),  ‖R*‖_{2,∞} ≤ √(μr/d2).

(ii) The entries of S* are "spread out": for α ∈ [0, 1), we assume S* ∈ S_α, where

S_α := { A ∈ ℝ^{d1×d2} : ‖A_(i,·)‖₀ ≤ αd2 for all i ∈ [d1]; ‖A_(·,j)‖₀ ≤ αd1 for all j ∈ [d2] }.  (3)

In other words, S* contains at most an α-fraction of nonzero entries per row and column.
3 Algorithms
For both the full and partial observation settings, our method proceeds in two phases. In the first
phase, we use a new sorting-based sparse estimator to produce a rough estimate S_init for S* based on
the observed matrix Y, and then find a rank r matrix factorized as U0V0⊤ that is a rough estimate
of M* by performing SVD on (Y − S_init). In the second phase, given (U0, V0), we perform an
iterative method to produce the series {(Ut, Vt)}_{t=0}^∞. In each step t, we first apply our sparse estimator
to produce a sparse matrix St based on (Ut, Vt), and then perform a projected gradient descent
step on the low-rank factorized space to produce (U_{t+1}, V_{t+1}). This flow is the same for full and
partial observations, though a few details differ. Algorithm 1 gives the full observation algorithm,
and Algorithm 2 gives the partial observation algorithm. We now describe the key details of each
algorithm.
Sparse Estimation. A natural idea is to keep those entries of the residual matrix Y − M that have large
magnitude. At the same time, we need to make use of the dispersed property of S_α: every column
and row contains at most an α-fraction of nonzero entries. Motivated by these two principles, we introduce
the following sparsification operator. For any matrix A ∈ ℝ^{d1×d2} and all (i, j) ∈ [d1] × [d2], we let

T_α[A]_(i,j) := A_(i,j) if |A_(i,j)| ≥ |A_(i,·)^(αd2)| and |A_(i,j)| ≥ |A_(·,j)^(αd1)|, and T_α[A]_(i,j) := 0 otherwise,  (4)

where A_(i,·)^(k) and A_(·,j)^(k) denote the elements of A_(i,·) and A_(·,j) that have the k-th largest magnitude,
respectively. In other words, we choose to keep those elements that are simultaneously among the
largest α-fraction entries in the corresponding row and column. In the case of entries having identical
magnitude, we break ties arbitrarily. It is thus guaranteed that T_α[A] ∈ S_α.
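As a concrete illustration, the operator (4) reduces to two partial selections, one per row and one per column. The following numpy sketch is our own rendering; the function name, the rounding of αd2 and αd1 to integers, and the tie handling are assumptions. On ties it may keep slightly more than the stated fraction, whereas the paper breaks ties arbitrarily.

```python
import numpy as np

def sparse_threshold(A, alpha):
    """Sketch of T_alpha in Eq. (4): keep A[i, j] only if its magnitude is
    simultaneously among the top alpha-fraction of row i and of column j."""
    d1, d2 = A.shape
    k_row = max(int(alpha * d2), 1)   # entries retained per row
    k_col = max(int(alpha * d1), 1)   # entries retained per column
    mag = np.abs(A)
    # magnitude of the k-th largest element in each row and each column
    row_kth = -np.partition(-mag, k_row - 1, axis=1)[:, k_row - 1][:, None]
    col_kth = -np.partition(-mag, k_col - 1, axis=0)[k_col - 1][None, :]
    keep = (mag >= row_kth) & (mag >= col_kth)
    return np.where(keep, A, 0.0)
```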
Algorithm 1 Fast RPCA
INPUT: Observed matrix Y with rank r and corruption fraction α; parameters γ, η; number of
iterations T.
// Phase I: Initialization.
1: S_init ← T_α[Y]   // see (4) for the definition of T_α[·].
2: [L, Σ, R] ← SVD_r[Y − S_init]²
3: U0 ← LΣ^{1/2}, V0 ← RΣ^{1/2}. Let U, V be defined according to (7).
// Phase II: Gradient based iterations.
4: U0 ← Π_U(U0), V0 ← Π_V(V0)
5: for t = 0, 1, ..., T − 1 do
6:   S_t ← T_{γα}[Y − U_tV_t⊤]
7:   U_{t+1} ← Π_U(U_t − η∇_U L(U_t, V_t; S_t) − (η/2) U_t(U_t⊤U_t − V_t⊤V_t))
8:   V_{t+1} ← Π_V(V_t − η∇_V L(U_t, V_t; S_t) − (η/2) V_t(V_t⊤V_t − U_t⊤U_t))
9: end for
OUTPUT: (U_T, V_T)
Initialization. In the fully observed setting, we compute S_init based on Y as S_init = T_α[Y]. In
the partially observed setting with sampling rate p, we let S_init = T_{2pα}[Y]. In both cases, we then
set U0 = LΣ^{1/2} and V0 = RΣ^{1/2}, where LΣR⊤ is an SVD of the best rank r approximation of
Y − S_init.
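A minimal sketch of Phase I, reusing sparse_threshold from above; it calls a dense SVD for clarity, where at scale one would substitute an approximate rank-r solver as footnote 2 suggests.

```python
def initialize(Y, r, alpha):
    """Sketch of lines 1-3 of Algorithm 1: rough sparse estimate, rank-r SVD
    of the cleaned matrix, and a symmetric split into the two factors."""
    S_init = sparse_threshold(Y, alpha)
    L, sigma, Rt = np.linalg.svd(Y - S_init, full_matrices=False)
    L, sigma, Rt = L[:, :r], sigma[:r], Rt[:r, :]
    U0 = L * np.sqrt(sigma)      # L @ diag(sigma)^{1/2}
    V0 = Rt.T * np.sqrt(sigma)   # R @ diag(sigma)^{1/2}
    return U0, V0
```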
Gradient Method on Factorized Space. After initialization, we proceed by projected gradient
descent. To do this, we define loss functions explicitly in the factored space, i.e., in terms of U, V, and
S:

L(U, V; S) := (1/2) |||UV⊤ + S − Y|||²_F,  (fully observed)  (5)

L̃(U, V; S) := (1/2p) |||Π_Ω(UV⊤ + S − Y)|||²_F.  (partially observed)  (6)
Recall that our goal is to recover M* that satisfies the μ-incoherence condition. Given an SVD
M* = L*Σ*R*⊤, we expect that the solution (U, V) is close to (L*Σ*^{1/2}, R*Σ*^{1/2}) up to some
rotation. In order to serve such μ-incoherent structure, it is natural to put constraints on the row
norms of U, V based on |||M*|||_op. As |||M*|||_op is unavailable, given U0, V0 computed in the first phase,
we rely on the sets U, V defined as

U := { A ∈ ℝ^{d1×r} : ‖A‖_{2,∞} ≤ √(2μr/d1) |||U0|||_op },  V := { A ∈ ℝ^{d2×r} : ‖A‖_{2,∞} ≤ √(2μr/d2) |||V0|||_op }.  (7)
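The projections Π_U and Π_V act row by row: since the constraint in (7) bounds each row's ℓ2 norm, the Euclidean projection simply rescales any row that is too long. A sketch follows, with the radius passed in as computed from (7), e.g., √(2μr/d1)·|||U0|||_op for U.

```python
def project_rows(A, max_row_norm):
    """Euclidean projection onto {A : ||A||_{2,inf} <= max_row_norm}:
    rows within the bound are untouched, longer rows are rescaled."""
    norms = np.linalg.norm(A, axis=1, keepdims=True)
    scale = np.minimum(1.0, max_row_norm / np.maximum(norms, 1e-12))
    return A * scale
```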
Now we consider the following optimization problems with constraints:

min_{U∈U, V∈V, S∈S_α}  L(U, V; S) + (1/8) |||U⊤U − V⊤V|||²_F,  (fully observed)  (8)

min_{U∈U, V∈V, S∈S_{pα}}  L̃(U, V; S) + (1/64) |||U⊤U − V⊤V|||²_F.  (partially observed)  (9)

The regularization term in the objectives above is used to encourage that U and V have the same
scale. Given (U0, V0), we propose the following iterative method to produce the series {(Ut, Vt)}_{t=0}^∞
and {St}_{t=0}^∞. We give the details for the fully observed case; the partially observed case is similar.
² SVD_r[A] stands for computing a rank-r SVD of matrix A, i.e., finding the top r singular values and vectors
of A. Note that we only need to compute the rank-r SVD approximately (see the initialization error requirement in
Theorem 1), so that we can leverage fast iterative approaches such as the block power method and Krylov subspace
methods.
For t = 0, 1, ..., we update St using the sparse estimator St = T_{γα}[Y − UtVt⊤], followed by a
projected gradient update on Ut and Vt:

U_{t+1} = Π_U(U_t − η∇_U L(U_t, V_t; S_t) − (η/2) U_t(U_t⊤U_t − V_t⊤V_t)),
V_{t+1} = Π_V(V_t − η∇_V L(U_t, V_t; S_t) − (η/2) V_t(V_t⊤V_t − U_t⊤U_t)).

Here α is the model parameter that characterizes the corruption fraction, and γ and η are algorithmic
tuning parameters, which we specify in our analysis. Essentially, the above algorithm corresponds
to applying the projected gradient method to optimize (8), where S is replaced by the aforementioned
sparse estimator in each step.
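Putting the pieces together, one Phase II iteration reads as below. This is a sketch building on sparse_threshold and project_rows above; the gradients ∇_U L = (UV⊤ + S − Y)V and ∇_V L = (UV⊤ + S − Y)⊤U follow directly from (5).

```python
def gradient_step(U, V, Y, alpha, gamma, eta, u_bound, v_bound):
    """Sketch of lines 6-8 of Algorithm 1 (fully observed case)."""
    S = sparse_threshold(Y - U @ V.T, gamma * alpha)
    resid = U @ V.T + S - Y              # residual entering both gradients
    balance = U.T @ U - V.T @ V          # scale-balancing regularizer term
    U_new = project_rows(U - eta * (resid @ V) - 0.5 * eta * (U @ balance),
                         u_bound)
    V_new = project_rows(V - eta * (resid.T @ U) + 0.5 * eta * (V @ balance),
                         v_bound)
    return U_new, V_new
```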
Algorithm 2 Fast RPCA with partial observations
INPUT: Observed matrix Y with support Ω; parameters α, γ, η; number of iterations T.
// Phase I: Initialization.
1: S_init ← T_{2pα}[Π_Ω(Y)]
2: [L, Σ, R] ← SVD_r[(1/p)(Y − S_init)]
3: U0 ← LΣ^{1/2}, V0 ← RΣ^{1/2}. Let U, V be defined according to (7).
// Phase II: Gradient based iterations.
4: U0 ← Π_U(U0), V0 ← Π_V(V0)
5: for t = 0, 1, ..., T − 1 do
6:   S_t ← T_{γpα}[Π_Ω(Y − U_tV_t⊤)]
7:   U_{t+1} ← Π_U(U_t − η∇_U L̃(U_t, V_t; S_t) − (η/16) U_t(U_t⊤U_t − V_t⊤V_t))
8:   V_{t+1} ← Π_V(V_t − η∇_V L̃(U_t, V_t; S_t) − (η/16) V_t(V_t⊤V_t − U_t⊤U_t))
9: end for
OUTPUT: (U_T, V_T)
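The partially observed iteration differs only in the masking and the 1/p rescaling of the loss (6). A sketch, under our assumption that unobserved entries of Y are stored as zeros and mask is the 0/1 indicator of Ω:

```python
def gradient_step_partial(U, V, Y, mask, p, alpha, gamma, eta,
                          u_bound, v_bound):
    """Sketch of lines 6-8 of Algorithm 2; mask encodes the support Omega."""
    S = sparse_threshold(mask * (Y - U @ V.T), gamma * p * alpha)
    G = mask * (U @ V.T + S - Y) / p     # gradient factor of the loss (6)
    balance = U.T @ U - V.T @ V
    U_new = project_rows(U - eta * (G @ V) - eta / 16.0 * (U @ balance),
                         u_bound)
    V_new = project_rows(V - eta * (G.T @ U) + eta / 16.0 * (V @ balance),
                         v_bound)
    return U_new, V_new
```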
4 Main Results
4.1 Analysis of Algorithm 1
We begin with some definitions and notation. It is important to define a proper error metric because
the optimal solution corresponds to a manifold and there are many distinguished pairs (U, V) that
minimize (8). Given the SVD of the true low-rank matrix M* = L*Σ*R*⊤, we let U* := L*Σ*^{1/2}
and V* := R*Σ*^{1/2}. We also let σ1* ≥ σ2* ≥ ... ≥ σr* be the sorted nonzero singular values of
M*, and denote the condition number of M* by κ, i.e., κ := σ1*/σr*. We define the estimation error
d(U, V; U*, V*) as the minimal Frobenius norm between (U, V) and (U*, V*) with respect to the
optimal rotation, namely

d(U, V; U*, V*) := min_{Q∈Q_r} √( |||U − U*Q|||²_F + |||V − V*Q|||²_F ),  (10)

for Q_r the set of r-by-r orthonormal matrices. This metric controls the reconstruction error, as

|||UV⊤ − M*|||_F ≲ √(σ1*) · d(U, V; U*, V*),  (11)

when d(U, V; U*, V*) ≤ √(σ1*). We denote the local region around the optimum (U*, V*) with
radius ω as

B2(ω) := { (U, V) ∈ ℝ^{d1×r} × ℝ^{d2×r} : d(U, V; U*, V*) ≤ ω }.
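The minimization over Q_r in (10) is an orthogonal Procrustes problem with a closed-form solution via one small r×r SVD; a numpy sketch:

```python
def estimation_error(U, V, U_star, V_star):
    """Sketch of the metric (10): stack the factors and solve the orthogonal
    Procrustes problem min_Q ||X - X_star Q||_F in closed form."""
    X = np.vstack([U, V])
    X_star = np.vstack([U_star, V_star])
    W, _, Zt = np.linalg.svd(X_star.T @ X, full_matrices=False)
    Q = W @ Zt                           # optimal r-by-r rotation
    return np.linalg.norm(X - X_star @ Q, "fro")
```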
The next two theorems provide guarantees for the initialization phase and gradient iterations, respectively, of Algorithm 1.
Theorem 1 (Initialization). Consider the pair (U0, V0) produced in the first phase of Algorithm 1.
If α ≤ 1/(16κμr), we have

d(U0, V0; U*, V*) ≤ 28√κ · αμr√r · √(σ1*).
Theorem 2 (Convergence). Consider the second phase of Algorithm 1. Suppose we choose γ = 2
and η = c/σ1* for any c ≤ 1/36. There exist constants c1, c2 such that when α ≤ c1/(κ²μr), given
any (U0, V0) ∈ B2(c2 √(σr*/κ)), the iterates {(Ut, Vt)}_{t=0}^∞ satisfy

d²(Ut, Vt; U*, V*) ≤ (1 − c/(8κ))^t · d²(U0, V0; U*, V*).

Therefore, using proper initialization and step size, the gradient iteration converges at a linear
rate with a constant contraction factor 1 − O(1/κ). To obtain relative precision ε compared to
the initial error, it suffices to perform O(κ log(1/ε)) iterations. Note that the step size is chosen
according to 1/σ1*. When α ≲ 1/(μ√(κr³)), Theorem 1 and the inequality (11) together imply that
|||U0V0⊤ − M*|||_op ≤ (1/2)σ1*. Hence we can set the step size as η = O(1/σ1(U0V0⊤)), using the
top singular value σ1(U0V0⊤) of the matrix U0V0⊤.
Combining Theorems 1 and 2 implies the following result that provides an overall guarantee for
Algorithm 1.
Corollary 1. Suppose that

α ≤ c · min{ 1/(μ√(κ³r³)), 1/(κ²μr) }

for some constant c. Then for any ε ∈ (0, 1), Algorithm 1 with T = O(κ log(1/ε)) outputs a pair
(U_T, V_T) that satisfies

|||U_TV_T⊤ − M*|||_F ≤ ε · σr*.  (12)
Remark 1 (Time Complexity). For simplicity we assume d1 = d2 = d. Our sparse estimator (4)
can be implemented by finding the top αd elements of each row and column via partial quick sort,
which has running time O(d² log(αd)). Performing the rank-r SVD in the first phase and computing the
gradient in each iteration both have complexity O(rd²).³ Algorithm 1 thus has total running time
O(κrd² log(1/ε)) for achieving an accuracy as in (12). We note that when κ = O(1), our algorithm
is orderwise faster than the AltProj algorithm in [21], which has running time O(r²d² log(1/ε)).
Moreover, our algorithm only requires computing one singular value decomposition.
Remark 2 (Robustness). Assuming κ = O(1), our algorithm can tolerate corruption at a sparsity
level up to α = O(1/(μr√r)). This is worse by a factor √r compared to the optimal statistical
guarantee 1/(μr) obtained in [11, 18, 21]. This looseness is a consequence of the condition for
(U0, V0) in Theorem 2. Nevertheless, when μr = O(1), our algorithm can tolerate a constant α
fraction of corruptions.
4.2 Analysis of Algorithm 2
We now move to the guarantees of Algorithm 2. We show here that not only can we handle partial
observations, but in fact subsampling the data in the fully observed case can significantly reduce the
time complexity from the guarantees given in the previous section without sacrificing robustness. In
particular, for smaller values of r, the complexity of Algorithm 2 has near linear dependence on the
dimension d, instead of quadratic.
In the following discussion, we let d := max{d1 , d2 }. The next two results control the quality of the
initialization step, and then the gradient iterations.
Theorem 3 (Initialization, partial observations). Suppose the observed indices Ω follow the Bernoulli
model given in (2). Consider the pair (U0, V0) produced in the first phase of Algorithm 2. There exist
constants {ci}_{i=1}^3 such that for any ε ∈ (0, √r/(8c1κ)), if

α ≤ 1/(64κμr),  p ≥ c2 (μr/ε² + 1/ε) · log d/(d1 ∧ d2),  (13)

then we have

d(U0, V0; U*, V*) ≤ 51√κ αμr√r √(σ1*) + 7c1 ε√(σ1*),

with probability at least 1 − c3 d⁻¹.
³ In fact, it suffices to compute the best rank-r approximation with running time independent of the eigen gap.
Theorem 4 (Convergence, partial observations). Suppose the observed indices Ω follow the Bernoulli
model given in (2). Consider the second phase of Algorithm 2. Suppose we choose γ = 3, and
η = c/(μrσ1*) for a sufficiently small constant c. There exist constants {ci}_{i=1}^4 such that if

α ≤ c1/(κ²μr)  and  p ≥ c2 μ⁴κ²r² log d/(d1 ∧ d2),  (14)

then with probability at least 1 − c3 d⁻¹, the iterates {(Ut, Vt)}_{t=0}^∞ satisfy

d²(Ut, Vt; U*, V*) ≤ (1 − c/(64μrκ))^t · d²(U0, V0; U*, V*)

for all (U0, V0) ∈ B2(c4 √(σr*/κ)).
Setting p = 1 in the above result recovers Theorem 2 up to an additional factor μr in the contraction
factor. For achieving ε relative accuracy, now we need O(μrκ log(1/ε)) iterations. Putting Theorems
3 and 4 together, we have the following overall guarantee for Algorithm 2.
Corollary 2. Suppose that

α ≤ c · min{ 1/(μ√(κ³r³)), 1/(κ²μr) },  p ≥ c0 μ⁴κ²r² log d/(d1 ∧ d2),

for some constants c, c0. With probability at least 1 − O(d⁻¹), for any ε ∈ (0, 1), Algorithm 2 with
T = O(μrκ log(1/ε)) outputs a pair (U_T, V_T) that satisfies

|||U_TV_T⊤ − M*|||_F ≤ ε · σr*.  (15)
This result shows that partial observations do not compromise robustness to sparse corruptions: as
long as the observation probability p satisfies the condition in Corollary 2, Algorithm 2 enjoys the
same robustness guarantees as the method using all entries. Below we provide two remarks on the
sample and time complexity. For simplicity, we assume d1 = d2 = d, κ = O(1).
Remark 3 (Sample complexity and matrix completion). Using the lower bound on p, it is sufficient
to have O(μ²r²d log d) observed entries. In the special case S* = 0, our partial observation model
is equivalent to the model of exact matrix completion (see, e.g., [8]). We note that our sample
complexity (i.e., observations needed) matches that of completing a positive semidefinite (PSD)
matrix by gradient descent as shown in [12], and is better than the non-convex matrix completion
algorithms in [19] and [23]. Accordingly, our result reveals the important fact that we can obtain
robustness in matrix completion without deterioration of our statistical guarantees. It is known
that any algorithm for solving exact matrix completion must have sample size Ω(μrd log d) [8], and a
nearly tight upper bound O(μrd log² d) is obtained in [10] by convex relaxation. While sub-optimal
by a factor μr, our algorithm is much faster than convex relaxation as shown below.
Remark 4 (Time complexity). Our sparse estimator on the sparse matrix with support Ω can be
implemented via partial quick sort with running time O(pd² log(αpd)). Computing the gradient
in each step involves the two terms in the objective function (9). Computing the gradient of the
first term L̃ takes time O(r|Ω|), whereas the second term takes time O(r²d). In the initialization
phase, performing the rank-r SVD on a sparse matrix with support Ω can be done in time O(r|Ω|). We
conclude that when |Ω| = O(μ²r²d log d), Algorithm 2 achieves the error bound (15) with running
time O(μ³r⁴d log d log(1/ε)). Therefore, in the small rank setting with r ≪ d^{1/3}, even when full
observations are given, it is better to use Algorithm 2 by subsampling the entries of Y.
5 Numerical Results
In this section, we provide numerical results and compare the proposed algorithms with existing
methods, including the inexact augmented Lagrange multiplier (IALM) approach [20] for solving
the convex relaxation (1) and the alternating projection (AltProj) algorithm proposed in [21]. All
algorithms are implemented in MATLAB⁴, and the codes for existing algorithms are obtained from
their authors. SVD computation in all algorithms uses the PROPACK library.⁵ We ran all simulations
on a machine with Intel 32-core Xeon (E5-2699) 2.3GHz with 240GB RAM.
⁴ Our code is available at https://www.yixinyang.org/code/RPCA_GD.zip.
⁵ http://sun.stanford.edu/~rmunk/PROPACK/
[Figure 1: three plots, (a)–(c). Axes: estimation error d(U, V; U*, V*), iteration count, time (secs), and dimension d; curves: GD with p = 1, 0.5, 0.2, 0.1, AltProj, and IALM. See the caption below.]
Figure 1: Results on synthetic data. (a) Plot of log estimation error versus number of iterations when using
gradient descent (GD) with varying sub-sampling rate p. It is conducted using d = 5000, r = 10, α = 0.1.
(b) Plot of running time of GD versus dimension d with r = 10, α = 0.1, p = 0.15 r² log d/d. The low-rank
matrix is recovered in all instances, and the line has slope approximately one. (c) Plot of log estimation error
versus running time for different algorithms in a problem with d = 5000, r = 10, α = 0.1.
[Figure 2 panel titles. Restaurant: Original, GD (49.8s), GD 20% sample (18.1s), AltProj (101.5s), IALM (434.6s). ShoppingMall: Original, GD (87.3s), GD 20% sample (43.4s), AltProj (283.0s), IALM (801.4s).]
Figure 2: Foreground-background separation in Restaurant and ShoppingMall videos. In each line, the leftmost
image is an original frame, and the other four are the separated background obtained from our algorithms with
p = 1, p = 0.2, AltProj, and IALM. The running time required by each algorithm is shown in the title.
Synthetic Datasets. We generate a squared data matrix Y = M* + S* ∈ ℝ^{d×d} as follows. The
low-rank part M* is given by M* = AB⊤, where A, B ∈ ℝ^{d×r} have entries drawn independently
from a zero mean Gaussian distribution with variance 1/d. For a given sparsity parameter α, each
entry of S* is set to be nonzero with probability α, and the values of the nonzero entries are sampled
uniformly from [−5r/d, 5r/d]. The results are summarized in Figure 1. Figure 1a shows the
convergence of our algorithms for different random instances with different sub-sampling rates p.
Figure 1b shows the running time of our algorithm with partially observed data. We note that our
algorithm is memory-efficient: in the large scale setting with d = 2×10⁵, using approximately
0.1% of the entries is sufficient for successful recovery. In contrast, AltProj and IALM are designed
to manipulate the entire matrix with d² = 4×10¹⁰ entries, which is prohibitive on a single machine.
Figure 1c compares our algorithms with AltProj and IALM by showing reconstruction error versus
real running time. Our algorithm requires significantly less computation to achieve the same accuracy
level, and using only a subset of the entries provides additional speed-up.
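The synthetic model just described is easy to reproduce; a sketch (the seed and function name are our choices, and numpy is imported as np as in the earlier sketches):

```python
def synthetic_instance(d, r, alpha, seed=0):
    """Sketch of the synthetic model of Section 5: Y = M* + S* with Gaussian
    rank-r factors and uniformly sampled sparse corruptions."""
    rng = np.random.default_rng(seed)
    A = rng.normal(0.0, np.sqrt(1.0 / d), size=(d, r))
    B = rng.normal(0.0, np.sqrt(1.0 / d), size=(d, r))
    M_star = A @ B.T
    S_star = rng.uniform(-5 * r / d, 5 * r / d, size=(d, d))
    S_star *= rng.random((d, d)) < alpha   # each entry nonzero w.p. alpha
    return M_star + S_star, M_star, S_star
```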
Foreground-background Separation. We apply our method to the task of foreground-background
(FB) separation in a video. We use two public benchmarks, the Restaurant and ShoppingMall
datasets.6 Each dataset contains a video with static background. By vectorizing and stacking the
frames as columns of a matrix Y, the FB separation problem can be cast as RPCA, where the static
background corresponds to a low rank matrix M* with identical columns, and the moving objects in
the video can be modeled as sparse corruptions S*. Figure 2 shows the output of different algorithms
on two frames from the dataset. Our algorithms require significantly less running time than both
AltProj and IALM. Moreover, even with 20% sub-sampling, our methods still seem to achieve
better separation quality. The details about parameter setting and more results are deferred to the
supplemental material.
⁶ http://perception.i2r.a-star.edu.sg/bk_model/bk_index.html
References
[1] Animashree Anandkumar, Rong Ge, Daniel Hsu, Sham M. Kakade, and Matus Telgarsky. Tensor decompositions for learning latent variable models. The Journal of Machine Learning Research, 15(1):2773–2832, 2014.
[2] Sivaraman Balakrishnan, Martin J. Wainwright, and Bin Yu. Statistical guarantees for the EM algorithm: From population to sample-based analysis. arXiv preprint arXiv:1408.2156, 2014.
[3] Srinadh Bhojanapalli, Prateek Jain, and Sujay Sanghavi. Tighter low-rank approximation via sampling the leveraged element. In Proceedings of the Twenty-Sixth Annual ACM-SIAM Symposium on Discrete Algorithms, pages 902–920. SIAM, 2015.
[4] Srinadh Bhojanapalli, Anastasios Kyrillidis, and Sujay Sanghavi. Dropping convexity for faster semi-definite optimization. arXiv preprint arXiv:1509.03917, 2015.
[5] Emmanuel J. Candès, Xiaodong Li, Yi Ma, and John Wright. Robust principal component analysis? Journal of the ACM (JACM), 58(3):11, 2011.
[6] Emmanuel J. Candès, Xiaodong Li, and Mahdi Soltanolkotabi. Phase retrieval via Wirtinger flow: Theory and algorithms. IEEE Transactions on Information Theory, 61(4):1985–2007, 2015.
[7] Emmanuel J. Candès and Benjamin Recht. Exact matrix completion via convex optimization. Foundations of Computational Mathematics, 9(6):717–772, 2009.
[8] Emmanuel J. Candès and Terence Tao. The power of convex relaxation: Near-optimal matrix completion. IEEE Transactions on Information Theory, 56(5):2053–2080, 2010.
[9] Venkat Chandrasekaran, Sujay Sanghavi, Pablo A. Parrilo, and Alan S. Willsky. Rank-sparsity incoherence for matrix decomposition. SIAM Journal on Optimization, 21(2):572–596, 2011.
[10] Yudong Chen. Incoherence-optimal matrix completion. IEEE Transactions on Information Theory, 61(5):2909–2923, 2015.
[11] Yudong Chen, Ali Jalali, Sujay Sanghavi, and Constantine Caramanis. Low-rank matrix recovery from errors and erasures. IEEE Transactions on Information Theory, 59(7):4324–4337, 2013.
[12] Yudong Chen and Martin J. Wainwright. Fast low-rank estimation by projected gradient descent: General statistical and algorithmic guarantees. arXiv preprint arXiv:1509.03025, 2015.
[13] Yuxin Chen and Emmanuel J. Candès. Solving random quadratic systems of equations is nearly as easy as solving linear systems. In Advances in Neural Information Processing Systems, pages 739–747, 2015.
[14] Kenneth L. Clarkson and David P. Woodruff. Low rank approximation and regression in input sparsity time. In Proceedings of the Forty-Fifth Annual ACM Symposium on Theory of Computing, pages 81–90. ACM, 2013.
[15] Alan Frieze, Ravi Kannan, and Santosh Vempala. Fast Monte-Carlo algorithms for finding low-rank approximations. Journal of the ACM (JACM), 51(6):1025–1041, 2004.
[16] Quanquan Gu, Zhaoran Wang, and Han Liu. Low-rank and sparse structure pursuit via alternating minimization. In Proceedings of the 19th International Conference on Artificial Intelligence and Statistics, pages 600–609, 2016.
[17] Moritz Hardt. Understanding alternating minimization for matrix completion. In 2014 IEEE 55th Annual Symposium on Foundations of Computer Science (FOCS), pages 651–660. IEEE, 2014.
[18] Daniel Hsu, Sham M. Kakade, and Tong Zhang. Robust matrix decomposition with sparse corruptions. IEEE Transactions on Information Theory, 57(11):7221–7234, 2011.
[19] Prateek Jain, Praneeth Netrapalli, and Sujay Sanghavi. Low-rank matrix completion using alternating minimization. In Proceedings of the Forty-Fifth Annual ACM Symposium on Theory of Computing, pages 665–674. ACM, 2013.
[20] Zhouchen Lin, Minming Chen, and Yi Ma. The augmented Lagrange multiplier method for exact recovery of corrupted low-rank matrices. arXiv preprint arXiv:1009.5055v3, 2013.
[21] Praneeth Netrapalli, UN Niranjan, Sujay Sanghavi, Animashree Anandkumar, and Prateek Jain. Non-convex robust PCA. In Advances in Neural Information Processing Systems, pages 1107–1115, 2014.
[22] Ju Sun, Qing Qu, and John Wright. When are nonconvex problems not scary? arXiv preprint arXiv:1510.06096, 2015.
[23] Ruoyu Sun and Zhi-Quan Luo. Guaranteed matrix completion via nonconvex factorization. In 2015 IEEE 56th Annual Symposium on Foundations of Computer Science (FOCS), pages 270–289. IEEE, 2015.
[24] Stephen Tu, Ross Boczar, Mahdi Soltanolkotabi, and Benjamin Recht. Low-rank solutions of linear matrix equations via Procrustes flow. arXiv preprint arXiv:1507.03566, 2015.
[25] Zhaoran Wang, Quanquan Gu, Yang Ning, and Han Liu. High dimensional EM algorithm: Statistical optimization and asymptotic normality. In Advances in Neural Information Processing Systems, pages 2512–2520, 2015.
[26] Huan Xu, Constantine Caramanis, and Sujay Sanghavi. Robust PCA via outlier pursuit. IEEE Transactions on Information Theory, 58(5):3047–3064, May 2012.
[27] Xinyang Yi and Constantine Caramanis. Regularized EM algorithms: A unified framework and statistical guarantees. In Advances in Neural Information Processing Systems, pages 1567–1575, 2015.
[28] Huishuai Zhang, Yuejie Chi, and Yingbin Liang. Provable non-convex phase retrieval with outliers: Median truncated Wirtinger flow. arXiv preprint arXiv:1603.03805, 2016.
[29] Tuo Zhao, Zhaoran Wang, and Han Liu. A nonconvex optimization framework for low rank matrix estimation. In Advances in Neural Information Processing Systems, pages 559–567, 2015.
[30] Qinqing Zheng and John Lafferty. A convergent gradient descent algorithm for rank minimization and semidefinite programming from random linear measurements. In Advances in Neural Information Processing Systems, pages 109–117, 2015.
6,020 | 6,446 | Multimodal Residual Learning for Visual QA
Jin-Hwa Kim
Sang-Woo Lee Donghyun Kwak Min-Oh Heo
Seoul National University
{jhkim,slee,dhkwak,moheo}@bi.snu.ac.kr
Jeonghee Kim
Jung-Woo Ha
Naver Labs, Naver Corp.
{jeonghee.kim,jungwoo.ha}@navercorp.com
Byoung-Tak Zhang
Seoul National University & Surromind Robotics
btzhang@bi.snu.ac.kr
Abstract
Deep neural networks continue to advance the state-of-the-art of image recognition tasks with various methods. However, applications of these methods to
multimodality remain limited. We present Multimodal Residual Networks (MRN)
for the multimodal residual learning of visual question-answering, which extends
the idea of the deep residual learning. Unlike the deep residual learning, MRN
effectively learns the joint representation from vision and language information.
The main idea is to use element-wise multiplication for the joint residual mappings
exploiting the residual learning of the attentional models in recent studies. Various alternative models introduced by multimodality are explored based on our
study. We achieve the state-of-the-art results on the Visual QA dataset for both
Open-Ended and Multiple-Choice tasks. Moreover, we introduce a novel method
to visualize the attention effect of the joint representations for each learning block
using back-propagation algorithm, even though the visual features are collapsed
without spatial information.
1 Introduction
Visual question-answering tasks provide a testbed to cultivate synergistic proposals which handle
multidisciplinary problems of vision, language and integrated reasoning. So, the visual question-answering
tasks let the studies in artificial intelligence go beyond narrow tasks. Furthermore, they may
help to solve real world problems which need the integrated reasoning of vision and language.
Deep residual learning [6] not only advances the studies in object recognition problems, but also gives
a general framework for deep neural networks. The existing non-linear layers of neural networks
serve to fit another mapping F(x), which is the residual of the identity mapping x. So, with the
shortcut connection of the identity mapping x, the whole module of layers fits F(x) + x for the desired
underlying mapping H(x). In other words, only the residual mapping F(x), defined by H(x) − x, is
learned with non-linear layers. In this way, very deep neural networks effectively learn representations
in an efficient manner.
Many attentional models utilize the residual learning to deal with various tasks, including textual
reasoning [25, 21] and visual question-answering [29]. They use an attentional mechanism to handle
two different information sources, a query and the context of the query (e.g. contextual sentences
or an image).

[Figure 1: Inference flow of Multimodal Residual Networks (MRN). Using our visualization method, the attention effects are shown as a sequence of three images. More examples are shown in Figure 4.]

[Figure 2: A schematic diagram of Multimodal Residual Networks with three-block layers.]

The query is added to the output of the attentional module, which makes the attentional
module learn the residual of query mapping as in deep residual learning.
In this paper, we propose Multimodal Residual Networks (MRN) to learn the multimodality of visual
question-answering tasks, exploiting the strengths of deep residual learning [6]. MRN inherently
uses shortcuts and residual mappings for multimodality. We explore various models based on the
choice of the shortcuts for each modality, and the joint residual mappings based on element-wise
multiplication, which effectively learn the multimodal representations without using explicit attention
parameters. Figure 1 shows the inference flow of the proposed MRN.
Additionally, we propose a novel method to visualize the attention effects of each joint residual mapping. The visualization method uses the back-propagation algorithm [22] on the difference between the visual input and the output of the joint residual mapping. The difference is back-propagated up to the input image. Since we use pretrained visual features, the pretrained CNN is augmented for visualization only. Based on this, we argue that MRN is an implicit attention model without explicit attention parameters.
Our contribution is three-fold: 1) extending deep residual learning to visual question-answering tasks; this method utilizes multimodal inputs and allows a deeper network structure; 2) achieving the state-of-the-art results on the Visual QA dataset for both Open-Ended and Multiple-Choice tasks; and 3) introducing a novel method to visualize the spatial attention effect of joint residual mappings from the collapsed visual features using back-propagation.
2 Related Works

2.1 Deep Residual Learning
Deep residual learning [6] allows neural networks to have a deeper structure of over 100 layers. Very deep neural networks are usually hard to optimize even when well-known activation functions and regularization techniques are applied [17, 7, 9]. This method consistently shows state-of-the-art results across multiple visual tasks including image classification, object detection, localization, and segmentation.
This idea assumes that a block of deep neural networks forming a non-linear mapping F(x) may
paradoxically fail to fit into an identity mapping. To resolve this, the deep residual learning adds
x to F(x) as a shortcut connection. With this idea, the non-linear mapping F(x) can focus on the
residual of the shortcut mapping x. Therefore, a learning block is defined as:

y = F(x) + x    (1)

where x and y are the input and output of the learning block, respectively.
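In code, the learning block of Equation 1 is simply a shortcut around a small non-linear mapping. Below is a minimal PyTorch sketch (our own illustration; the two-layer tanh form of F and the sizes are assumptions, not the original implementation):

```python
import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    """y = F(x) + x: the layers only fit the residual F(x) = H(x) - x."""
    def __init__(self, dim):
        super().__init__()
        self.f = nn.Sequential(nn.Linear(dim, dim), nn.Tanh(),
                               nn.Linear(dim, dim))

    def forward(self, x):
        return self.f(x) + x  # identity shortcut carries x

y = ResidualBlock(64)(torch.randn(8, 64))  # shape (8, 64)
```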
2.2 Stacked Attention Networks
Stacked Attention Networks (SAN) [29] explicitly learn weights over visual feature vectors to select a small portion of visual information for a given question vector. Furthermore, this model stacks the attention networks for multi-step reasoning, narrowing down the selection of visual information. For example, if the attention networks are asked to find a pink handbag in a scene, they first try to find pink objects, and then narrow down to the pink handbag.

For the attention networks, the weights are learned from a question vector and the corresponding visual feature vectors. These weights are used for a linear combination of multiple visual feature vectors indexed by spatial information. Through this, SAN successfully selects a portion of the visual information. Finally, the sum of the combined visual feature vector and the previous question vector is transferred as a new input question vector to the next learning block:
q_k = F(q_{k−1}, V) + q_{k−1}    (2)

Here, q_k is the question vector for the k-th learning block and V is a visual feature matrix whose columns index specific spatial locations. F(q, V) denotes the attention networks of SAN.
3 Multimodal Residual Networks
Deep residual learning emphasizes the importance of identity (or linear) shortcuts so that the non-linear mappings can efficiently learn only residuals [6]. In multimodal learning, this idea may not be readily applicable. Since the modalities may be correlated, we need to carefully define joint residual functions as the non-linear mappings. Moreover, the shortcuts are undetermined due to the multimodality. Therefore, the characteristics of a given task ought to be considered in determining the model structure.
3.1 Background
We infer a form of residual learning in the attention networks of SAN, since Equation 18 in [29] shows a question vector transferred directly through successive layers of the attention networks. In the case of SAN, the shortcut mapping is for the question vector, and the non-linear mapping is the attention networks.
In the attention networks, Yang et al. [29] assume that an appropriate choice of weights on visual feature vectors for a given question vector sufficiently captures the joint representation for answering. However, question information contributes to the joint representation only weakly, through the coefficients p, which may cause a bottleneck in learning the joint representation.

F(q, V) = Σ_i p_i V_i    (3)

The coefficients p are the output of a nonlinear function of a question vector q and a visual feature matrix V (see Equations 15-16 in Yang et al. [29]). V_i is the visual feature vector at spatial index i of the 14 × 14 grid.
in 14 ? 14 grids.
Lu et al. [15] propose an element-wise multiplication of a question vector and a visual feature vector
after appropriate embeddings for a joint model. This makes a strong baseline outperforming some of
the recent works [19, 2]. We firstly take this approach as a candidate for the joint residual function,
since it is simple yet successful for visual question-answering. In this context, we take the global
visual feature approach for the element-wise multiplication, instead of the multiple (spatial) visual
features approach for the explicit attention mechanism of SAN. (We present a visualization technique
exploiting the element-wise multiplication in Section 5.2.)
Based on these observations, we follow the shortcut mapping and the stacking architecture of SAN [29]; however, element-wise multiplication is used for the joint residual function F. These updates effectively learn the joint representation of the given vision and language information, addressing the bottleneck issue of the attention networks of SAN.
[Figure 3: Alternative models explored to justify our proposed model. The base model (a) has a shortcut for the question vector, as SAN does [29], and its joint residual function takes the form of the Deep Q+I model's joint function [15]. (b) adds an extra embedding for the visual modality. (c) adds extra embeddings for both modalities. (d) uses identity mappings for the shortcuts; in the first learning block, a linear mapping is used to match the input dimension to the joint dimension. (e) uses two shortcuts, one for each modality; for simplicity, the linear mapping of the visual shortcut appears only in the first learning block. Notice that (d) and (e) are compared to (b) after the selection of (b) among (a)-(c) on test-dev results. Eventually, we chose (b) for its performance and relative simplicity.]
3.2 Multimodal Residual Networks
MRN consists of multiple learning blocks, which are stacked for deep residual learning. Denoting an optimal mapping by H(q, v), we approximate it using

H_1(q, v) = W_{q′}^{(1)} q + F^{(1)}(q, v).    (4)

The first (linear) approximation term is W_{q′}^{(1)} q and the first joint residual function is F^{(1)}(q, v). The linear mapping W_{q′} is used to match the feature dimensions. We define the joint residual function as

F^{(k)}(q, v) = σ(W_q^{(k)} q) ⊙ σ(W_2^{(k)} σ(W_1^{(k)} v)),    (5)

where σ is tanh and ⊙ is element-wise multiplication. The question vector and the visual feature vector both contribute directly to the joint representation. We justify this choice in Sections 4 and 5.

For deeper residual learning, we replace q with H_1(q, v) in the next layer. In more general terms, Equations 4 and 5 can be rewritten as

H_L(q, v) = W_{q′} q + Σ_{l=1}^{L} W_{F^{(l)}} F^{(l)}(H_{l−1}, v),    (6)

where L is the number of learning blocks, H_0 = q, W_{q′} = Π_{l=1}^{L} W_{q′}^{(l)}, and W_{F^{(l)}} = Π_{m=l+1}^{L} W_{q′}^{(m)}. The cascading in Equation 6 can intuitively be represented as shown in Figure 2. Notice that the shortcuts for the visual part are identity mappings that transfer the input visual feature vector to each layer (dashed line). At the end of each block, we denote by H_l the output of the l-th learning block; ⊕ denotes element-wise addition.
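A single learning block of Equations 4 and 5 can be sketched as follows. This is an illustrative reimplementation rather than the released code; the module names and the model-(b) layer layout are our assumptions (the paper uses a 2,400-d question, a 4,096-d VGG feature, and a 1,200-d joint embedding):

```python
import torch
import torch.nn as nn

class MRNBlock(nn.Module):
    """H_l(q, v) = W_q' q + sigma(W_q q) * sigma(W_2 sigma(W_1 v))."""
    def __init__(self, d_q, d_v, d_j):
        super().__init__()
        self.shortcut = nn.Linear(d_q, d_j)                     # W_q' in Eq. (4)
        self.q_path = nn.Sequential(nn.Linear(d_q, d_j), nn.Tanh())
        self.v_path = nn.Sequential(nn.Linear(d_v, d_j), nn.Tanh(),
                                    nn.Linear(d_j, d_j), nn.Tanh())

    def forward(self, q, v):
        return self.shortcut(q) + self.q_path(q) * self.v_path(v)  # Eq. (5)

q, v = torch.randn(2, 2400), torch.randn(2, 4096)
h1 = MRNBlock(2400, 4096, 1200)(q, v)
h2 = MRNBlock(1200, 4096, 1200)(h1, v)  # stacking blocks realizes Eq. (6)
```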
4 Experiments

4.1 Visual QA Dataset
We choose the Visual QA (VQA) dataset [1] for the evaluation of our models. Other datasets may not be ideal, since they have a limited number of examples to train and test [16], or have questions synthesized from the image captions [14, 20].
Table 1: The results of the alternative models (a)-(e) on test-dev (Open-Ended task).

Model | All   | Y/N   | Num.  | Other
(a)   | 60.17 | 81.83 | 38.32 | 46.61
(b)   | 60.53 | 82.53 | 38.34 | 46.78
(c)   | 60.19 | 81.91 | 37.87 | 46.70
(d)   | 59.69 | 81.67 | 37.23 | 46.00
(e)   | 60.20 | 81.98 | 38.25 | 46.57

Table 2: The effect of the visual features and the number of target answers on the test-dev results (Open-Ended task). Vgg stands for VGG-19 and Res for ResNet-152 features, as described in Section 4.

Model    | All   | Y/N   | Num.  | Other
Vgg, 1k  | 60.53 | 82.53 | 38.34 | 46.78
Vgg, 2k  | 60.77 | 82.10 | 39.11 | 47.46
Vgg, 3k  | 60.68 | 82.40 | 38.69 | 47.10
Res, 1k  | 61.45 | 82.36 | 38.40 | 48.81
Res, 2k  | 61.68 | 82.28 | 38.82 | 49.25
Res, 3k  | 61.47 | 82.28 | 39.09 | 48.76
The questions and answers of the VQA dataset are collected via Amazon Mechanical Turk from human subjects who satisfy the experimental requirements. The dataset includes 614,163 questions and 7,984,119 answers, since ten answers are gathered for each question from unique human subjects.
Therefore, Agrawal et al. [1] proposed a new accuracy metric:

accuracy = min( (# of humans that provided that answer) / 3 , 1 ).    (7)
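The metric is a one-liner; a minimal sketch of Equation 7:

```python
def vqa_accuracy(predicted, human_answers):
    """An answer is fully correct if at least 3 of the 10 annotators gave it."""
    return min(human_answers.count(predicted) / 3.0, 1.0)

print(vqa_accuracy("sheep", ["sheep"] * 4 + ["goat"] * 6))  # 1.0
print(vqa_accuracy("goat", ["sheep"] * 8 + ["goat"] * 2))   # 0.666... (2 of 10 agree)
```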
The questions are answered in two ways: Open-Ended and Multiple-Choice. Unlike Open-Ended, Multiple-Choice provides the additional information of eighteen candidate answers for each question. There are three types of answers: yes/no (Y/N), numbers (Num.), and others (Other). Table 3 shows that the Other type benefits the most from Multiple-Choice.

The images come from the MS-COCO dataset: 123,287 of them for training and validation, and 81,434 for test. The images are carefully collected to contain multiple objects and natural situations, which also makes them well suited to visual question-answering tasks.
4.2 Implementation
The Torch framework and the rnn package [13] are used to build our models. For efficient computation over variable-length questions, TrimZero is used to trim out zero vectors [11]. TrimZero eliminates zero computations at every time step in mini-batch learning. Its efficiency is affected by the batch size, the RNN model size, and the number of zeros in the inputs. We found TrimZero to be well suited to VQA tasks; it reduces training time by approximately 37.5% in our experiments.
Preprocessing We follow the same preprocessing procedure as DeeperLSTM+NormalizedCNN [15] (Deep Q+I) by default. The number of answers is 1k, 2k, or 3k, using the most frequent answers, which cover 86.52%, 90.45%, and 92.42% of the questions, respectively. The questions are tokenized using the Python Natural Language Toolkit (nltk) [3]. The resulting vocabulary sizes are 14,770, 15,031, and 15,169, respectively.
Pretrained Models A question vector q ∈ R^2400 is the last output vector of a GRU [4], initialized with the parameters of Skip-Thought Vectors [12]. Based on the study of Noh et al. [19], this method is effective for question embedding in visual question-answering tasks. A visual feature vector v is the output of the first fully-connected layer of the VGG-19 network [23], whose dimension is 4,096. Alternatively, ResNet-152 [6] is used, whose dimension is 2,048. The error is back-propagated to the input question for fine-tuning, but not to the visual part v, due to the heavy computational cost of training.
Postprocessing An image captioning model [10] is used to improve the accuracy of the Other type. Let v ∈ R^{|Ω|} be the intermediate representation right before the softmax is applied, where |Ω| is the vocabulary size of answers and v_i corresponds to answer a_i. If a_i is not a number, yes, or no, and it appears at least once in the generated caption, then we update v_i ← v_i + 1. Notice that the pretrained image captioning model is not part of training. This simple procedure improves the test-dev overall accuracy by around 0.1% (0.3% for the Other type). We attribute this improvement to "tie breaking" in the Other type. For the Multiple-Choice task, we mask the output of the softmax layer with the given candidate answers.

Table 3: The VQA test-standard results. Some accuracies [29, 2] are reported with one less digit of precision than others, so they are zero-filled to match.
(OE = Open-Ended, MC = Multiple-Choice)

Method        | OE All | OE Y/N | OE Num. | OE Other | MC All | MC Y/N | MC Num. | MC Other
DPPnet [19]   | 57.36  | 80.28  | 36.92   | 42.24    | 62.69  | 80.35  | 38.79   | 52.79
D-NMN [2]     | 58.00  | -      | -       | -        | -      | -      | -       | -
Deep Q+I [15] | 58.16  | 80.56  | 36.53   | 43.73    | 63.09  | 80.59  | 37.70   | 53.64
SAN [29]      | 58.90  | -      | -       | -        | -      | -      | -       | -
ACK [27]      | 59.44  | 81.07  | 37.12   | 45.83    | -      | -      | -       | -
FDA [8]       | 59.54  | 81.34  | 35.67   | 46.10    | 64.18  | 81.25  | 38.30   | 55.20
DMN+ [28]     | 60.36  | 80.43  | 36.82   | 48.33    | -      | -      | -       | -
MRN           | 61.84  | 82.39  | 38.23   | 49.41    | 66.33  | 82.41  | 39.57   | 58.40
Human [1]     | 83.30  | 95.77  | 83.39   | 72.67    | -      | -      | -       | -
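The captioning postprocessing described above is a one-line boost of the pre-softmax scores; a minimal sketch (the function name, tokenization, and numeric test are our own simplifications):

```python
import numpy as np

def caption_boost(scores, answers, caption):
    """Add 1 to v_i when answer a_i is not a number, not yes/no, and
    appears in the generated caption (applied before the softmax)."""
    tokens = set(caption.lower().split())
    out = scores.copy()
    for i, a in enumerate(answers):
        if a not in {"yes", "no"} and not a.isdigit() and a in tokens:
            out[i] += 1.0
    return out

v = caption_boost(np.zeros(4), ["yes", "2", "frisbee", "dog"],
                  "a dog catches a frisbee")  # boosts "frisbee" and "dog"
```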
Hyperparameters By default, we follow Deep Q+I. The common embedding size of the joint representation is 1,200. The learnable parameters are initialized using a uniform distribution from −0.08 to 0.08, except for the pretrained models. The batch size is 200, and the number of iterations is fixed to 250k. RMSProp [26] is used for optimization, and dropout [7, 5] is used for regularization. The hyperparameters are fixed using test-dev results. We compare our method to the state of the art using test-standard results.
4.3 Exploring Alternative Models
Figure 3 shows the alternative models we explored, based on the observations in Section 3. We carefully select alternative models (a)-(c) for the importance of embeddings in multimodal learning [18, 24], (d) for the effectiveness of identity mappings as reported by [6], and (e) to confirm the use of question-only shortcuts in the multiple blocks, as in [29]. For comparison, all models have three-block layers (selected after a pilot test) and use VGG-19 features with 1k answers; the number of learning blocks is then explored to confirm the pilot test. The effects of the pretrained visual feature models and the number of answers are also explored. All validation is performed on the test-dev split.
5 Results

5.1 Quantitative Analysis
The VQA Challenge, which released the VQA dataset, provides evaluation servers for the test-dev and test-standard splits. For test-dev, the evaluation server permits unlimited submissions for validation, while test-standard permits limited submissions for the competition. We report accuracies as percentages.
Alternative Models The test-dev results of the alternative models for the Open-Ended task are shown in Table 1. (a) shows a significant improvement over SAN. However, (b) is only marginally better than (a). Compared to (b), (c) deteriorates the performance: an extra embedding for the question vector may easily cause overfitting, leading to overall degradation. The identity shortcuts in (d) also cause a degradation problem; the extra parameters of the linear mappings appear to effectively support the task.

(e) shows reasonable performance; however, the extra shortcut is not essential. The empirical results seem to support this: the question-only model (50.39%) achieves a result competitive with the joint model (57.75%), while the image-only model gets a poor accuracy (28.13%) (see Table 2 in [1]). Eventually, we chose model (b) for its performance and relative simplicity.
[Figure 4: Examples of the visualization for the three-block layered MRN: (a) What kind of animals are these? sheep; (b) What animal is the picture? elephant; (c) What is this animal? zebra; (d) What game is this person playing? tennis; (e) How many cats are here? 2; (f) What color is the bird? yellow; (g) What sport is this? surfing; (h) Is the horse jumping? yes. The original images are shown first in each group. The next three images show the input gradients of the attention effect for each learning block, as described in Section 5.2. The gradients of the color channels for each pixel are summed after taking their absolute values; summed values greater than the mean plus one standard deviation are visualized as the attention effect (bright color) on the images. The answers (blue) are predicted by MRN.]
The effects of various other options (Skip-Thought Vectors [12] for parameter initialization, Bayesian Dropout [5] for regularization, the image captioning model [10] for postprocessing, and the usage of shortcut connections) are explored in Appendix A.1.
Number of Learning Blocks To confirm the effectiveness of the number of learning blocks selected via a pilot test (L = 3), we explore this again on the chosen model (b). As the depth increases, the overall accuracies are 58.85% (L = 1), 59.44% (L = 2), 60.53% (L = 3), and 60.42% (L = 4).
Visual Features The ResNet-152 visual features are significantly better than the VGG-19 features for the Other type in Table 2, even though the dimension of the ResNet features (2,048) is half that of the VGG features (4,096). The ResNet visual features are also used in previous work [8]; however, our model achieves a remarkably better performance, by a large margin (see Table 3).
Number of Target Answers The number of target answers slightly affects the overall accuracies, with a trade-off among answer types, so the decision on the number of target answers is difficult to make. We chose Res, 2k in Table 2 based on the overall accuracy (for the Multiple-Choice task, see Appendix A.1).
Comparisons with the State of the Art Our chosen model significantly outperforms the other state-of-the-art methods for both Open-Ended and Multiple-Choice tasks in Table 3. However, the performance on the Number and Other types is still not satisfactory compared to human performance, even though the advances in recent works were mainly on Other-type answers. This fact motivates the study of a counting mechanism in future work. The model comparison is performed on the test-standard results.
5.2 Qualitative Analysis
In Equation 5, the left term σ(W_q q) can be seen as a masking (attention) vector that selects part of the visual information. We assume that the difference between the right term V := σ(W_2 σ(W_1 v)) and the masked vector F(q, v) indicates the attention effect caused by the masking vector. The attention effect L_att = (1/2)‖V − F‖² is then visualized on the image by calculating the gradient of L_att with respect to a given image I, while treating F as a constant:

∂L_att/∂I = (V − F) · ∂V/∂I.    (8)
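With automatic differentiation, Equation 8 amounts to detaching F from the graph before back-propagating. A self-contained toy sketch (the tiny "CNN", single-layer visual embedding, and fixed mask below are stand-ins for the pretrained VGG-19 and Equation 5, not the original code):

```python
import torch
import torch.nn as nn

cnn = nn.Sequential(nn.Flatten(), nn.Linear(3 * 16 * 16, 64))   # stand-in CNN
visual_embed = nn.Sequential(nn.Linear(64, 32), nn.Tanh())      # V's embedding
q_mask = torch.rand(32)                                          # plays sigma(W_q q)

image = torch.randn(1, 3, 16, 16, requires_grad=True)
V = visual_embed(cnn(image))          # V := sigma(W2 sigma(W1 v)), one layer here
F = q_mask * V                        # masked output F(q, v)
loss = 0.5 * ((V - F.detach()) ** 2).sum()   # L_att with F held constant (Eq. 8)
loss.backward()
saliency = image.grad.abs().sum(dim=1)       # per-pixel attention effect
```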
This technique can be applied to each learning block in a similar way.
Since we use the preprocessed visual features, the pretrained CNN is augmented only for this
visualization. Note that model (b) in Table 1 is used for this visualization, and the pretrained VGG-19
is used for preprocessing and augmentation. The model is trained using the training set of the VQA
dataset, and visualized using the validation set. Examples are shown in Figure 4 (more examples in
Appendix A.2-4).
Unlike other works [29, 28] that use explicit attention parameters, MRN does not use any explicit attentional mechanism. However, we observe that element-wise multiplication can be interpreted as information masking, which yields a novel method for visualizing the attention effect of this operation. Since MRN does not depend on a few attention parameters (e.g., a 14 × 14 grid), our visualization method shows a higher resolution than others [29, 28]. Based on this, we argue that MRN is an implicit attention model without an explicit attention mechanism.
6 Conclusions
The idea of deep residual learning is applied to visual question-answering tasks. Based on two observations from previous works, various alternative models are suggested and validated to propose the three-block layered MRN. Our model achieves the state-of-the-art results on the VQA dataset for both Open-Ended and Multiple-Choice tasks. Moreover, we have introduced a novel method to visualize the spatial attention from the collapsed visual features using back-propagation.

We believe our visualization method brings an implicit attention mechanism to research on attentional models. Using back-propagation of the attention effect, further investigation in object detection, segmentation, and tracking is worthwhile.
Acknowledgments
The authors would like to thank Patrick Emaase for helpful comments and editing. This work was
supported by Naver Corp. and partly by the Korea government (IITP-R0126-16-1072-SW.StarLab,
KEIT-10044009-HRI.MESSI, KEIT-10060086-RISF, ADD-UD130070ID-BMRR).
References
[1] Aishwarya Agrawal, Jiasen Lu, Stanislaw Antol, Margaret Mitchell, C. Lawrence Zitnick,
Dhruv Batra, and Devi Parikh. VQA: Visual Question Answering. In International Conference
on Computer Vision, 2015.
[2] Jacob Andreas, Marcus Rohrbach, Trevor Darrell, and Dan Klein. Learning to Compose Neural
Networks for Question Answering. arXiv preprint arXiv:1601.01705, 2016.
[3] Steven Bird, Ewan Klein, and Edward Loper. Natural language processing with Python.
O?Reilly Media, Inc., 2009.
[4] Kyunghyun Cho, Bart van Merrienboer, Dzmitry Bahdanau, and Yoshua Bengio. On the
Properties of Neural Machine Translation: Encoder-Decoder Approaches. arXiv preprint
arXiv:1409.1259, 2014.
[5] Yarin Gal. A Theoretically Grounded Application of Dropout in Recurrent Neural Networks.
arXiv preprint arXiv:1512.05287, 2015.
[6] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep Residual Learning for Image
Recognition. arXiv preprint arXiv:1512.03385, 2015.
8
[7] Geoffrey E Hinton, Nitish Srivastava, Alex Krizhevsky, Ilya Sutskever, and Ruslan R Salakhutdinov. Improving neural networks by preventing co-adaptation of feature detectors. arXiv
preprint arXiv:1207.0580, 2012.
[8] Ilija Ilievski, Shuicheng Yan, and Jiashi Feng. A Focused Dynamic Attention Model for Visual
Question Answering. arXiv preprint arXiv:1604.01485, 2016.
[9] Sergey Ioffe and Christian Szegedy. Batch Normalization : Accelerating Deep Network Training
by Reducing Internal Covariate Shift. In Proceedings of the 32nd International Conference on
Machine Learning, 2015.
[10] Andrej Karpathy and Li Fei-Fei. Deep Visual-Semantic Alignments for Generating Image
Descriptions. In 28th IEEE Conference on Computer Vision and Pattern Recognition, 2015.
[11] Jin-Hwa Kim, Jeonghee Kim, Jung-Woo Ha, and Byoung-Tak Zhang. TrimZero: A Torch
Recurrent Module for Efficient Natural Language Processing. In Proceedings of KIIS Spring
Conference, volume 26, pages 165?166, 2016.
[12] Ryan Kiros, Yukun Zhu, Ruslan Salakhutdinov, Richard S. Zemel, Antonio Torralba, Raquel
Urtasun, and Sanja Fidler. Skip-Thought Vectors. arXiv preprint arXiv:1506.06726, 2015.
[13] Nicholas Léonard, Sagar Waghmare, Yang Wang, and Jin-Hwa Kim. rnn: Recurrent Library
for Torch. arXiv preprint arXiv:1511.07889, 2015.
[14] Tsung-Yi Lin, Michael Maire, Serge Belongie, James Hays, Pietro Perona, Deva Ramanan, Piotr
Dollár, and C Lawrence Zitnick. Microsoft COCO: Common objects in context. In European
Conference on Computer Vision, pages 740?755. Springer, 2014.
[15] Jiasen Lu, Xiao Lin, Dhruv Batra, and Devi Parikh. Deeper LSTM and normalized CNN Visual
Question Answering model. https://github.com/VT-vision-lab/VQA_LSTM_CNN, 2015.
[16] Mateusz Malinowski, Marcus Rohrbach, and Mario Fritz. Ask Your Neurons: A Neural-based
Approach to Answering Questions about Images. arXiv preprint arXiv:1505.01121, 2015.
[17] Vinod Nair and Geoffrey E Hinton. Rectified Linear Units Improve Restricted Boltzmann
Machines. Proceedings of the 27th International Conference on Machine Learning, 2010.
[18] Jiquan Ngiam, Aditya Khosla, Mingyu Kim, Juhan Nam, Honglak Lee, and Andrew Y Ng.
Multimodal Deep Learning. In Proceedings of The 28th International Conference on Machine
Learning, pages 689?696, 2011. ISBN 9781450306195.
[19] Hyeonwoo Noh, Paul Hongsuck Seo, and Bohyung Han. Image Question Answering using Convolutional Neural Network with Dynamic Parameter Prediction. arXiv preprint
arXiv:1511.05756, 2015.
[20] Mengye Ren, Ryan Kiros, and Richard Zemel. Exploring Models and Data for Image Question
Answering. In Advances in Neural Information Processing Systems 28, 2015.
[21] Tim Rocktäschel, Edward Grefenstette, Karl Moritz Hermann, Tomáš Kočiský, and Phil Blunsom. Reasoning about Entailment with Neural Attention. In International Conference on
Learning Representations, pages 1?9, 2016.
[22] David E Rumelhart, Geoffrey E Hinton, and Ronald J Williams. Learning representations by
back-propagating errors. Nature, 323(6088):533?536, 1986.
[23] Karen Simonyan and Andrew Zisserman. Very Deep Convolutional Networks for Large-Scale
Image Recognition. In International Conference on Learning Representations, 2015.
[24] Nitish Srivastava and Ruslan R Salakhutdinov. Multimodal Learning with Deep Boltzmann
Machines. In Advances in Neural Information Processing Systems 25, pages 2222?2230. 2012.
[25] Sainbayar Sukhbaatar, Arthur Szlam, Jason Weston, and Rob Fergus. End-To-End Memory
Networks. In Advances in Neural Information Processing Systems 28, pages 2440?2448, 2015.
[26] Tijmen Tieleman and Geoffrey Hinton. Lecture 6.5-rmsprop: Divide the gradient by a running
average of its recent magnitude. COURSERA: Neural Networks for Machine Learning, 4, 2012.
[27] Qi Wu, Peng Wang, Chunhua Shen, Anthony Dick, and Anton van den Hengel. Ask Me
Anything: Free-form Visual Question Answering Based on Knowledge from External Sources.
In IEEE Conference on Computer Vision and Pattern Recognition, 2016.
[28] Caiming Xiong, Stephen Merity, and Richard Socher. Dynamic Memory Networks for Visual
and Textual Question Answering. arXiv preprint arXiv:1603.01417, 2016.
[29] Zichao Yang, Xiaodong He, Jianfeng Gao, Li Deng, and Alex Smola. Stacked Attention
Networks for Image Question Answering. arXiv preprint arXiv:1511.02274, 2015.
The Power of Optimization from Samples
Eric Balkanski
Harvard University
ericbalkanski@g.harvard.edu
Aviad Rubinstein
University of California, Berkeley
aviad@eecs.berkeley.edu
Yaron Singer
Harvard University
yaron@seas.harvard.edu
Abstract
We consider the problem of optimization from samples of monotone submodular
functions with bounded curvature. In numerous applications, the function optimized is not known a priori, but instead learned from data. What are the guarantees
we have when optimizing functions from sampled data?
In this paper we show that for any monotone submodular function with curvature
c there is a (1 c)/(1 + c c2 ) approximation algorithm for maximization
under cardinality constraints when polynomially-many samples are drawn from the
uniform distribution over feasible sets. Moreover, we show that this algorithm is
optimal. That is, for any c < 1, there exists a submodular function with curvature
c for which no algorithm can achieve a better approximation. The curvature
assumption is crucial as for general monotone submodular functions no algorithm
can obtain a constant-factor approximation for maximization under a cardinality
constraint when observing polynomially-many samples drawn from any distribution
over feasible sets, even when the function is statistically learnable.
1 Introduction
Traditionally, machine learning is concerned with predictions: assuming data is generated from some
model, the goal is to predict the behavior of the model on data similar to that observed. In many cases
however, we harness machine learning to make decisions: given observations from a model the goal
is to find its optimum, rather than predict its behavior. Some examples include:
• Ranking in information retrieval: In ranking, the goal is to select k ∈ N documents that are most relevant for a given query. The underlying model is a function which maps a set of documents and a given query to a relevance score. Typically we do not have access to the scoring function, and thus learn it from data. In the learning-to-rank framework, for example, the input consists of observations of document-query pairs and their relevance scores. The goal is to construct a scoring function of query-document pairs so that, given a query, we can decide on the k most relevant documents.
• Optimal tagging: The problem of optimal tagging consists of picking k tags for some new
content to maximize incoming traffic. The model is a function which captures the way in
which users navigate through content given their tags. Since the algorithm designer cannot
know the behavior of every online user, the model is learned from observations on user
navigation in order to make a decision on which k tags maximize incoming traffic.
• Influence in networks: In influence maximization the goal is to identify a subset of individuals who can spread information in a manner that generates a large cascade. The underlying assumption is that there is a model of influence that governs the way in which individuals forward information from one to another. Since the model of influence is not known, it is learned from data. The observed data is pairs of a subset of nodes who initiated a cascade and the total number of individuals influenced. The decision is the optimal set of influencers.
In the interest of maintaining theoretical guarantees on the decisions, we often assume that the
generative model has some structure which is amenable to optimization. When the decision variables
are discrete quantities, a natural structure for the model is submodularity. A function f : 2^N → R defined over a ground set N = {e_1, . . . , e_n} of elements is submodular if it exhibits a diminishing marginal returns property, i.e., f_S(e) ≥ f_T(e) for all sets S ⊆ T ⊆ N and elements e ∉ T, where f_S(e) = f(S ∪ {e}) − f(S) is the marginal contribution of element e to set S ⊆ N. This diminishing returns property encapsulates numerous applications in machine learning and data mining and is particularly appealing due to its theoretical guarantees on optimization (see related work below).
The guarantees on optimization of submodular functions apply to the case in which the algorithm
designer has access to some succinct description of the function, or alternatively some idealized value
oracle which allows querying for function values of any given set. In numerous settings such as in
the above examples, we do not have access to the function or its value oracle, but rather learn the
function from observed data. If the function learned from data is submodular we can optimize it
and obtain a solution with provable guarantees on the learned model. But how do the guarantees
of this solution on the learned model relate to its guarantees on the generative model? If we obtain
an approximate optimum on the learned model which turns out to be far from the optimum of the
submodular function we aim to optimize, the provable guarantees at hand do not apply.
Optimization from samples. For concreteness, suppose that the generative model is a monotone submodular function f : 2^N → R and we wish to find a solution to max_{S:|S|≤k} f(S). To formalize the concept of observations in standard learning-theoretic terms, we can assume that we observe samples of sets drawn from some distribution D together with their function values, i.e., {(S_i, f(S_i))}_{i=1}^m. In terms of learnability, under some assumptions about the distribution and the function, submodular functions are statistically learnable (see the discussion of PMAC learnability). In terms of approximation guarantees for optimization, a simple greedy algorithm obtains a (1 − 1/e)-approximation.
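For contrast with the samples model, the greedy algorithm in the value oracle model is a few lines; the coverage-style function below is our own toy example, not from the paper:

```python
def greedy_max(f, ground_set, k):
    """Greedy for max_{|S| <= k} f(S) with value-oracle access to f; a
    (1 - 1/e)-approximation when f is monotone submodular."""
    S = set()
    for _ in range(k):
        e = max(ground_set - S, key=lambda x: f(S | {x}) - f(S))
        S.add(e)
    return S

cover = {1: {"a", "b"}, 2: {"b", "c"}, 3: {"c"}, 4: {"d"}}
f = lambda S: len(set().union(*(cover[i] for i in S))) if S else 0
print(greedy_max(f, {1, 2, 3, 4}, 2))  # a size-2 set covering 3 items
```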
Recent work shows that optimization from samples is generally impossible [4], even for models that
are learnable and optimizable. In particular, even for maximizing coverage functions, which are a
special case of submodular functions and widely used in practice, no algorithm can obtain a constant
factor approximation using fewer than exponentially many samples of feasible solutions drawn from
any distribution. In practice however, the functions we aim to optimize may be better behaved.
An important property of submodular functions that has been heavily explored recently is that of
curvature. Informally, the curvature
P is a measure of how far the function is to being modular. A
function f is modular if f (S) = e2S f (e), and has curvature c 2 [0, 1] if fS (e) (1 c)f (e) for
any S ? N . Curvature plays an important role since the hard instances of submodular optimization
often occur only when the curvature is unbounded, i.e., c close to 1. The hardness results for
optimization from samples are no different, and apply when the curvature is unbounded.
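To make the definition concrete, the curvature of a small explicit set function can be checked directly from the defining ratio; this brute-force sanity check is our own illustration (exponential-time in general, fine for toy examples):

```python
def curvature(f, N):
    r"""Curvature of a monotone set function f over ground set N:
    c = 1 - min_e f_{N \ {e}}(e) / f({e}), i.e., one minus the smallest
    ratio of the last-context marginal to the empty-context marginal."""
    return 1 - min((f(N) - f(N - {e})) / f({e}) for e in N)

w = {1: 3.0, 2: 1.0, 3: 2.0}
modular = lambda S: sum(w[e] for e in S)
print(curvature(modular, {1, 2, 3}))                      # 0.0: modular
print(curvature(lambda S: modular(S) ** 0.5, {1, 2, 3}))  # ~0.79: concave twist
```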
What are the guarantees for optimization from samples of submodular functions
with bounded curvature?
In this paper we study the power of optimization from samples when the curvature is bounded. Our main result shows that for any monotone submodular function with curvature c there is an algorithm which observes polynomially many samples from the uniform distribution over feasible sets and obtains an approximation ratio of (1 − c)/(1 + c − c²) − o(1). Furthermore, we show that this bound is tight: for any c < 1, there exist monotone submodular functions with curvature c for which no algorithm can obtain an approximation better than (1 − c)/(1 + c − c²) + o(1) given polynomially many samples. We also perform experiments on synthetic hard instances of monotone submodular functions that convey some interpretation of our results.
For the case of modular functions, a 1 − o(1) approximation can be obtained, and as a consequence this leads to a (1 − c)² approximation for submodular functions with bounded curvature [4]. The goal of this work is to exploit the curvature property to obtain the optimal algorithm for optimization from samples.
A high-level overview of the techniques. The algorithm estimates the expected marginal contribution of each element to a random set. It then returns the (approximately) best set between the set of
elements with the highest estimates and a random set. The curvature property is used to bound the
differences between the marginal contribution of each element to: (1) a random set, (2) the set of
elements with highest (estimated) marginal contributions to a random set, and (3) the optimal set. A
key observation in the analysis is that if the difference between (1) and (3) is large, then a random set
has large value (in expectation).
To obtain our matching inapproximability result, we construct an instance where, after viewing
polynomially many samples, the elements of the optimal set cannot be distinguished from a much
larger set of elements that have high marginal contribution to a random set, but low marginal
contribution when combined with each other. The main challenge is constructing the optimal
elements such that they have lower marginal contribution to a random set than to the other optimal
elements. This requires carefully defining the way different types of elements interact with each other,
while maintaining the global properties of monotonicity, submodularity, and bounded curvature.
1.1 Related work
Submodular maximization. In the traditional value oracle model, an algorithm may adaptively query polynomially many sets S_i and obtain their values f(S_i) via a black box. It is well known that in this model the greedy algorithm obtains a (1 − 1/e)-approximation for a wide range of constraints including cardinality constraints [23], and that no algorithm can do better [6]. Submodular optimization is an essential tool for problems in machine learning and data mining such as sensor placement [20, 12], information retrieval [28, 14], optimal tagging [24], influence maximization [19, 13], information summarization [21, 22], and vision [17, 18].
Learning. A recent line of work focuses on learning submodular functions from samples [3, 8, 2, 10, 11, 1, 9]. The standard model for learning submodular functions is α-PMAC learnability, introduced by Balcan and Harvey [3], which generalizes the well-known PAC learnability framework of Valiant [26]. Informally, a function is PAC or PMAC learnable if, given polynomially many samples, it is possible to construct a function that is likely to mimic the function from which the samples are coming. Monotone submodular functions are α-PMAC learnable from samples coming from a product distribution, for some constant α and under some assumptions [3].
Curvature. In the value oracle model, the greedy algorithm is a ((1 − e^{−c})/c)-approximation algorithm for cardinality constraints [5]. Recently, Sviridenko et al. [25] improved this approximation to 1 − c/e with variants of the continuous greedy and local search algorithms. Submodular optimization under curvature has also been studied for more general constraints [27, 15] and for submodular minimization [16]. The curvature assumption has applications in problems such as maximum entropy sampling [25], column-subset selection [25], and submodular welfare [27].
2 Optimization from samples
We now precisely define the framework of optimization from samples. A sample (S, f(S)) of a function f(·) is a set together with its value. As in the PMAC-learning framework, the samples (S_i, f(S_i)) are such that the sets S_i are drawn i.i.d. from a distribution D. As in the standard optimization framework, the goal is to return a set S satisfying some constraint M ⊆ 2^N such that f(S) is an α-approximation to the optimal value f(S*) with S* ∈ M.

A class of functions F is α-optimizable from samples under constraint M and over distribution D if for all functions f(·) ∈ F there exists an algorithm which, given polynomially many samples (S_i, f(S_i)), returns with high probability over the samples a set S ∈ M such that

f(S) ≥ α · max_{T ∈ M} f(T).

In the unconstrained case, a random set achieves a 1/4-approximation for general (not necessarily monotone) submodular functions [7]. We focus on the constrained case and consider a simple cardinality constraint M, i.e., M = {S : |S| ≤ k}. To avoid trivialities in the framework, it is important to fix a distribution D. We consider the distribution D to be the uniform distribution over all feasible sets, i.e., all sets of size at most k.
We are interested in functions that are both learnable and optimizable. It is already known that there exist classes of functions, such as coverage and submodular functions, that are both learnable and optimizable but not optimizable from samples for the M and D defined above. This paper studies optimization from samples under an additional assumption: curvature. We assume that the curvature c of the function is known to the algorithm designer. In the appendix, we show an impossibility result for learning the curvature of a function from samples.
3 An optimal algorithm

We design a ((1 − c)/(1 + c − c²) − o(1))-optimization from samples algorithm for monotone submodular functions with curvature c. In the next section, we show that this approximation ratio is tight. The main contribution is improving over the ((1 − c)² − o(1))-approximation algorithm from [4] to obtain a tight bound on the approximation.
The algorithm. Algorithm 1 first estimates the expected marginal contribution of each element e_i to a uniformly random set of size k − 1, which we denote by R for the remainder of this section. These expected marginal contributions E_R[f_R(e_i)] are estimated with v̂_i. The estimates v̂_i are the differences between the average value avg(S_{k,i}) := (Σ_{T∈S_{k,i}} f(T))/|S_{k,i}| of the collection S_{k,i} of samples of size k containing e_i and the average value of the collection S_{k−1,−i} of samples of size k − 1 not containing e_i. We then wish to return the best set between the random set R and the set S consisting of the k elements with the largest estimates v̂_i. Since we do not know the value of S, we lower bound it with v̂_S using the curvature property. We estimate the expected value E_R[f(R)] of R with v̂_R, which is the average value of the collection S_{k−1} of all samples of size k − 1. Finally, we compare the values of S and R using v̂_S and v̂_R and return the better of these two sets.
Algorithm 1 A tight ((1 − c)/(1 + c − c²) − o(1))-optimization from samples algorithm for monotone submodular functions with curvature c

Input: S = {S_i : (S_i, f(S_i)) is a sample}
1: v̂_i ← avg(S_{k,i}) − avg(S_{k−1,−i})
2: S ← argmax_{|T|=k} Σ_{i∈T} v̂_i
3: v̂_S ← (1 − c) Σ_{e_i∈S} v̂_i    (a lower bound on the value of f(S))
4: v̂_R ← avg(S_{k−1})    (an estimate of the value of a random set R)
5: if v̂_S ≥ v̂_R then
6:   return S
7: else
8:   return R
9: end if
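As an illustration of the whole procedure, the following is a minimal Python sketch of Algorithm 1 (our own simplification, not the authors' code); ties, empty sample collections, and the exact sampling model are handled naively:

```python
import random
from statistics import mean

def optimize_from_samples(samples, N, k, c):
    """Sketch of Algorithm 1. `samples` is a list of (frozenset, value)
    pairs drawn uniformly from feasible sets of size <= k."""
    def avg(pred):
        vals = [val for T, val in samples if pred(T)]
        return mean(vals) if vals else 0.0

    # Line 1: v_i estimates E[f_R(e_i)] as the average over size-k samples
    # containing e_i minus the average over size-(k-1) samples avoiding e_i.
    v = {e: avg(lambda T: len(T) == k and e in T)
            - avg(lambda T: len(T) == k - 1 and e not in T) for e in N}
    S = set(sorted(N, key=v.get, reverse=True)[:k])     # Line 2
    v_S = (1 - c) * sum(v[e] for e in S)                # Line 3: lower bound on f(S)
    v_R = avg(lambda T: len(T) == k - 1)                # Line 4: estimate of E[f(R)]
    return S if v_S >= v_R else set(random.sample(sorted(N), k - 1))
```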
The analysis. Without loss of generality, let S = {e_1, . . . , e_k} be the set defined in Line 2 of the algorithm, and define S_i to be the first i elements of S, i.e., S_i := {e_1, . . . , e_i}. Similarly, for the optimal solution S* we have S* = {e*_1, . . . , e*_k} and S*_i := {e*_1, . . . , e*_i}. We abuse notation and denote by f(R) and f_R(e) the expected values E_R[f(R)] and E_R[f_R(e)], where the randomization is over the random set R of size k − 1.

At a high level, the curvature property is used to bound the loss from f(S) to Σ_{i≤k} f_R(e_i) and from Σ_{i≤k} f_R(e*_i) to f(S*). By the algorithm, Σ_{i≤k} f_R(e_i) is greater than Σ_{i≤k} f_R(e*_i). When bounding the loss from Σ_{i≤k} f_R(e*_i) to f(S*), a key observation is that if this loss is large, then it must be the case that R has a high expected value. This observation is formalized in our analysis by bounding this loss in terms of f(R), and it motivates Algorithm 1 returning the best of R and S. Lemma 1 is the main part of the analysis and gives an approximation for S. The approximation guarantee for Algorithm 1 (formalized as Theorem 1) follows by finding the worst case over the ratio f(R)/f(S*).

Lemma 1. Let S be the set defined in Algorithm 1 and f(·) be a monotone submodular function with curvature c. Then

f(S) ≥ (1 − o(1)) · v̂_S ≥ ((1 − c)(1 − c · f(R)/f(S*)) − o(1)) · f(S*).
Proof. First, observe that

f(S) = Σ_{i≤k} f_{S_{i−1}}(e_i) ≥ (1 − c) Σ_{i≤k} f(e_i) ≥ (1 − c) Σ_{i≤k} f_R(e_i),

where the first inequality is by curvature and the second is by monotonicity. We now claim that, w.h.p. and with a sufficiently large polynomial number of samples, the estimates of the marginal contribution of an element are precise:

f_R(e_i) − f(S*)/n² ≤ v̂_i ≤ f_R(e_i) + f(S*)/n²,

and defer the proof to the appendix. Thus f(S) ≥ (1 − c) Σ_{i≤k} v̂_i − f(S*)/n ≥ v̂_S − f(S*)/n.

Next, by the definition of S in the algorithm, we get

v̂_S/(1 − c) = Σ_{i≤k} v̂_i ≥ Σ_{i≤k} v̂*_i ≥ Σ_{i≤k} f_R(e*_i) − f(S*)/n.

It is possible to obtain a 1 − c loss between Σ_{i≤k} f_R(e*_i) and f(S*) with an argument similar to the first part. The key idea to improve this loss is to use the curvature property on the elements in R instead of on the elements e*_i ∈ S*. By curvature, we have f_{S*}(R) ≥ (1 − c)f(R). We now wish to relate f_{S*}(R) and Σ_{i≤k} f_R(e*_i). Note that f(S*) + f_{S*}(R) = f(R ∪ S*) = f(R) + f_R(S*) by the definition of marginal contribution, and Σ_{i≤k} f_R(e*_i) ≥ f_R(S*) by submodularity. Combining the previous equation and inequality, we get Σ_{i≤k} f_R(e*_i) ≥ f(S*) + f_{S*}(R) − f(R). By the previous curvature observation, we conclude that

Σ_{i≤k} f_R(e*_i) ≥ f(S*) + (1 − c)f(R) − f(R) = (1 − c · f(R)/f(S*)) · f(S*).
Combining Lemma 1 with the fact that we obtain value at least max{f(R), (1 − c) Σ_{i=1}^{k} v̂_i}, we obtain the main result of this section.

Theorem 1. Let f(·) be a monotone submodular function with curvature c. Then Algorithm 1 is a ((1 − c)/(1 + c − c²) − o(1))-optimization from samples algorithm.
Proof. In the appendix, we show that the estimate v̂_R of f(R) is precise: f(R) − f(S*)/n² ≤ v̂_R ≤ f(R) + f(S*)/n². In addition, by the first inequality in Lemma 1, f(S) ≥ (1 − o(1)) · v̂_S. So by the algorithm and the second inequality in Lemma 1, the approximation obtained by the returned set is at least

(1 − o(1)) · max{f(R)/f(S*), v̂_S/f(S*)} ≥ (1 − o(1)) · max{f(R)/f(S*), (1 − c)(1 − c · f(R)/f(S*))}.

Let x := f(R)/f(S*). The best of x and (1 − c)(1 − cx) − o(1) is minimized when x = (1 − c)(1 − cx), that is, when x = (1 − c)/(1 + c − c²). Thus, the approximation obtained is at least (1 − c)/(1 + c − c²) − o(1).
4 Hardness
We show that the approximation obtained by Algorithm 1 is tight: for every c < 1, there exist monotone submodular functions that cannot be ((1 − c)/(1 + c − c²))-optimized from samples. This impossibility result is information-theoretic; we show that with high probability the samples do not contain the right information to obtain a better approximation.
Technical overview. To obtain a tight bound, all the losses from Algorithm 1 must be tight. We need to obtain a 1 − c · f(R)/f(S*) gap between the contribution Σ_{i≤k} f_R(e*_i) of optimal elements to a random set and the value f(S*). This gap implies that as a set grows with additional random elements, the contribution of optimal elements must decrease. The main difficulty is in obtaining this decrease while maintaining random sets of small value, submodularity, and the curvature.
[Figure 1: The symmetric functions g(s_G, s_P) and b(s_B) as a function of set size. The plot marks the 1/(1 + c − c²) loss for the good function g and the 1 − c loss for the bad function b.]
The ground set of elements is partitioned into three parts: the good elements G, the bad elements B, and the poor elements P. In relation to the analysis of the algorithm, the optimal solution S* is G, the set S consists mostly of elements in B, and a random set consists mostly of elements in P. The values of the good, bad, and poor elements are given by the good, bad, and poor functions g(·), b(·), and p(·), to be defined later, and the functions f(·) we construct for the impossibility result are:

f^G(S) := g(S ∩ G, S ∩ P) + b(S ∩ B) + p(S ∩ P).

The value of the good function also depends on the poor elements, to obtain the decrease in marginal contribution of good elements mentioned above. The proof of the hardness result (Theorem 2) starts with concentration bounds (Lemma 2) showing that w.h.p. every sample contains a small number of good and bad elements and a large number of poor elements. Using these concentration bounds, Lemma 3 gives two conditions on the functions g(·), b(·), and p(·) that yield the desired result. Informally, the first condition is that good and bad elements cannot be distinguished, while the second is that G has larger value than any set with a small number of good elements. We then construct these functions and show that they satisfy the two conditions (Lemma 4). Finally, Lemma 5 shows that f(·) is monotone submodular with curvature c.

Theorem 2. For every c < 1, there exists a hypothesis class of monotone submodular functions with curvature c that is not ((1 − c)/(1 + c − c²) + o(1))-optimizable from samples.
The remainder of this section is devoted to the proof of Theorem 2. Let δ > 0 be some small constant. The set of poor elements P is fixed and has size n − n^{2/3−δ}. The good elements G are a uniformly random subset of P^C of size k := n^{1/3}; the remaining elements B are the bad elements. The following concentration bound is used to show that elements in G and B cannot be distinguished; the proof is deferred to the appendix.

Lemma 2. W.h.p., all samples S are such that |S ∩ (G ∪ B)| ≤ log n and |S ∩ P| ≥ k − 2 log n.
We now give two conditions on the good, bad, and poor functions that yield an impossibility result based on the above concentration bounds. The first condition ensures that good and bad elements cannot be distinguished. The second condition quantifies the gap between the value of k good elements and a set with a small number of good elements. We denote by s_G the number of good elements in a set S, i.e., s_G := |S ∩ G|, and define s_B and s_P similarly. The good, bad, and poor functions are symmetric, meaning they each have equal value over sets of equal size, and we abuse notation with g(s_G, s_P) = g(S ∩ G, S ∩ P), and similarly for b(s_B) and p(s_P). Figure 1 is a simplified illustration of these two conditions.

Lemma 3. Consider sets S and S′, and assume g(·), b(·), and p(·) are such that

1. g(s_G, s_P) + b(s_B) = g(s′_G, s′_P) + b(s′_B) if s_G + s_B = s′_G + s′_B ≤ log n and s_P, s′_P ≥ k − 2 log n;

2. g(s_G, s_P) + b(s_B) + p(s_P) < α · g(k, 0) if s_G ≤ n^δ and s_G + s_B + s_P ≤ k.

Then the hypothesis class of functions F = {f^G(·) : G ⊆ P^C, |G| = k} is not α-optimizable from samples.
Proof. By Lemma 2, for any two samples S and S′, w.h.p. s_G + s_B ≤ log n, s′_G + s′_B ≤ log n, and s_P, s′_P ≥ k − 2 log n. If s_G + s_B = s′_G + s′_B, then by the first assumption g(s_G, s_P) + b(s_B) = g(s′_G, s′_P) + b(s′_B). Recall that G is a uniformly random subset of the fixed set P^C and that B consists of the remaining elements of P^C. Thus, w.h.p., the value f^G(S) of every sample S is independent of which random subset G is. In other words, no algorithm can distinguish good elements from bad elements with polynomially many samples. Let T be the set returned by the algorithm. Since any decision of the algorithm is independent of G, the expected number of good elements in T is t_G ≤ k · |G|/|G ∪ B| = k²/n^{2/3−δ} = n^δ. Thus,

E_G[f^G(T)] = g(t_G, t_P) + b(t_B) + p(t_P) ≤ g(n^δ, t_P) + b(t_B) + p(t_P) < α · g(k, 0),

where the first inequality is by the submodularity and monotonicity of f^G(·) in the good elements G, and the second inequality is by the second condition of the lemma. In expectation, the set T returned by the algorithm is therefore not an α-approximation to the solution G for at least one function f^G(·) ∈ F, and F is not α-optimizable from samples.
Constructing g(·), b(·), p(·). The goal is now to construct g(·), b(·), and p(·) that satisfy the above
conditions. We start with the good and bad functions:

g(sG, sP) = sG · (1 − (1 − 1/(1 + c − c²)) · sP/(k − 2 log n))   if sP ≤ k − 2 log n,
g(sG, sP) = sG · 1/(1 + c − c²)                                   otherwise;

b(sB) = sB · 1/(1 + c − c²)                                           if sB ≤ log n,
b(sB) = (sB − log n) · (1 − c)/(1 + c − c²) + log n · 1/(1 + c − c²)   otherwise.
These functions exactly exhibit the losses from the analysis of the algorithm in the case where
the algorithm returns bad elements. As illustrated in Figure 1, there is a 1 − c loss between the
contribution 1/(1 + c − c²) of a bad element to a random set and its contribution (1 − c)/(1 + c − c²)
to a set with at least log n bad elements. There is also a 1/(1 + c − c²) loss between the contribution
1 of a good element to a set with no poor elements and its contribution 1/(1 + c − c²) to a random
set. We add a function p(sP) to f^G(·) so that it is monotone increasing when adding poor elements.
p(sP) = sP · (1 − c)/(1 + c − c²) · k/(k − 2 log n)   if sP ≤ k − 2 log n,
p(sP) = (sP − (k − 2 log n)) · (1 − c)²/(1 + c − c²) · k/(k − 2 log n)
        + (k − 2 log n) · (1 − c)/(1 + c − c²) · k/(k − 2 log n)   otherwise.
The next two lemmas show that these functions satisfy Lemma 3 and that f^G(·) is monotone
submodular with curvature c, which concludes the proof of Theorem 2.
Lemma 4. The functions g(·), b(·), and p(·) defined above satisfy the conditions of Lemma 3 with
α = (1 − c)/(1 + c − c²) + o(1).
Proof. We start with the first condition. Assume sG + sB = s′G + s′B ≤ log n and sP, s′P ≥
k − 2 log n. Then,

g(sG, sP) + b(sB) = (sG + sB) · 1/(1 + c − c²) = (s′G + s′B) · 1/(1 + c − c²) = g(s′G, s′P) + b(s′B).

For the second condition, assume sG ≤ n^ε and sG + sB + sP ≤ k. It is without loss to assume that
sB + sP ≥ k − n^ε; then

f^G(S) ≤ (1 + o(1)) · (sB + sP) · (1 − c)/(1 + c − c²) ≤ k · ((1 − c)/(1 + c − c²) + o(1)).

We conclude by noting that g(k, 0) = k.
Lemma 5. The function f^G(·) is a monotone submodular function with curvature c.
Proof. We show that the marginal contributions are positive (monotonicity) and decreasing (submodularity), but not by more than a 1 − c factor (curvature), i.e., that fS(e) ≥ fT(e) ≥ (1 − c)fS(e) ≥ 0 for
all S ⊆ T and e ∉ T. Let e be a good element; then

f^G_S(e) = 1 − (1 − 1/(1 + c − c²)) · sP/(k − 2 log n)   if sP ≤ k − 2 log n,
f^G_S(e) = 1/(1 + c − c²)                                 otherwise.

Since sP ≤ tP for S ⊆ T, we obtain fS(e) ≥ fT(e) ≥ 0. It is also easy to see that we get
fT(e) ≥ 1/(1 + c − c²) ≥ (1 − c) ≥ (1 − c)fS(e). For bad elements,

f^G_S(e) = 1/(1 + c − c²)           if sB ≤ log n,
f^G_S(e) = (1 − c)/(1 + c − c²)     otherwise.

Thus, fS(e) ≥ fT(e) ≥ (1 − c)fS(e) ≥ 0 for all S ⊆ T and e ∉ T. Finally, for poor elements,

f^G_S(e) = −(1 − 1/(1 + c − c²)) · sG/(k − 2 log n) + (1 − c)/(1 + c − c²) · k/(k − 2 log n)   if sP ≤ k − 2 log n,
f^G_S(e) = (1 − c)²/(1 + c − c²) · k/(k − 2 log n)                                              otherwise.

Since sG ≤ k, f^G_S(e) ≥ (1 − c)²/(1 + c − c²) · k/(k − 2 log n) in both cases. Consider S ⊆ T;
then sG ≤ tG, and fS(e) ≥ fT(e) ≥ (1 − c)²/(1 + c − c²) · k/(k − 2 log n) ≥ (1 − c)fS(e) ≥ 0.
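Because g, b, and p are piecewise linear in the counts, these three claims are easy to check numerically. The sketch below is an editorial addition, not from the paper: k = 12 and L = 2 are arbitrary small stand-ins for k and log n, and the check runs over count triples rather than actual sets.

import itertools

c, k, L = 0.5, 12, 2                    # L stands in for log n; small values for speed
A = 1 / (1 + c - c**2)
D = (1 - c) / (1 + c - c**2)

def g(sG, sP):
    return sG * (1 - (1 - A) * sP / (k - 2*L)) if sP <= k - 2*L else sG * A

def b(sB):
    return sB * A if sB <= L else (sB - L) * D + L * A

def p(sP):
    if sP <= k - 2*L:
        return sP * D * k / (k - 2*L)
    return (sP - (k - 2*L)) * D * (1 - c) * k / (k - 2*L) + k * D

def f(s):                               # s = (sG, sB, sP)
    return g(s[0], s[2]) + b(s[1]) + p(s[2])

def marg(s, i):                         # marginal of one more type-i element
    t = list(s); t[i] += 1
    return f(tuple(t)) - f(s)

pts = list(itertools.product(range(k), repeat=3))
for i in range(3):                      # 0: good, 1: bad, 2: poor
    m = [marg(s, i) for s in pts]
    assert min(m) >= -1e-9                          # monotone
    assert min(m) >= (1 - c) * max(m) - 1e-9        # curvature at most c
    for s in pts:                                   # submodular: marginals shrink
        for j in range(3):
            t = list(s); t[j] += 1
            assert marg(tuple(t), i) <= marg(s, i) + 1e-9
print("monotone submodular with curvature <= c on the grid")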
5 Experiments
We perform simulations on simple synthetic functions. These experiments are meant to complement
the theoretical analysis by conveying some interpretations of the bounds obtained. The synthetic
functions are a simplification of the construction for the impossibility result. The motivation for
these functions is to obtain hard instances that are challenging for the algorithm. More precisely, the
function considered is

f(S) = |S ∩ (G ∪ B)|                              if |S ∩ B| ≤ 10,
f(S) = |S ∩ G| + |S ∩ B| · (1 − c) + 10c          otherwise,

where G and B are fixed sets of size 10² and 10³ respectively. The ground set N contains 10⁵
elements. It is easy to verify that f(·) has curvature c. This function is hard to optimize since the
elements in G and B cannot be distinguished from samples.

[Figure 2: The objective f(·) as a function of the cardinality constraint k.]

We consider several benchmarks. The first is the value obtained by the learn-then-optimize approach,
where we first learn the function and then optimize the learned function. Equivalently, this is a
random set of size k, since the learned function is a constant with the algorithm from [3]. We also
compare our algorithm to the value of the best sample observed. The solution returned by the greedy
algorithm is an upper bound and is a solution obtainable only in the full-information setting. The
results are summarized in Figures 2 and 3. In Figure 2, the value of greedy, best sample, and random
set do not change for different curvatures c since w.h.p. they pick at most 10 elements from B. For
curvature c = 0, when the function is modular, our algorithm performs as well as the greedy algorithm,
which is optimal. As the curvature increases, the solution obtained by our algorithm worsens, but
still significantly outperforms the best sample and a random set. The power of our algorithm is that it
is capable of distinguishing elements in G ∪ B from the other elements.

[Figure 3: The approximation as a function of the curvature 1 − c when k = 100.]
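For concreteness, the synthetic function and two of the benchmarks can be sketched as follows. This is an editorial addition, not the authors' code: the instance is scaled down (n = 2000 instead of 10⁵, smaller G and B) so that naive greedy runs quickly, and the random set stands in for the learn-then-optimize benchmark.

import numpy as np

rng = np.random.default_rng(0)
n, c, k = 2000, 0.5, 40
G = set(rng.choice(n, 20, replace=False).tolist())
B = set(rng.choice(sorted(set(range(n)) - G), 50, replace=False).tolist())

def f(S):
    sG, sB = len(S & G), len(S & B)
    return sG + sB if sB <= 10 else sG + sB * (1 - c) + 10 * c

def greedy(budget):                        # full-information upper bound
    S = set()
    for _ in range(budget):
        base = f(S)
        S.add(max(set(range(n)) - S, key=lambda e: f(S | {e}) - base))
    return S

random_set = set(rng.choice(n, k, replace=False).tolist())  # learn-then-optimize
print("greedy:", f(greedy(k)), "  random set:", f(random_set))

On this instance greedy collects all of G and keeps adding B elements at marginal 1 − c, while a random set almost entirely misses G ∪ B, which mirrors the gap visible in Figures 2 and 3.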
References
[1] Balcan, M. (2015). Learning submodular functions with applications to multi-agent systems. In AAMAS.
[2] Balcan, M., Constantin, F., Iwata, S., and Wang, L. (2012). Learning valuation functions. In COLT.
[3] Balcan, M. and Harvey, N. J. A. (2011). Learning submodular functions. In STOC.
[4] Balkanski, E., Rubinstein, A., and Singer, Y. (2015). The limitations of optimization from samples. arXiv
preprint arXiv:1512.06238.
[5] Conforti, M. and Cornuéjols, G. (1984). Submodular set functions, matroids and the greedy algorithm: tight
worst-case bounds and some generalizations of the Rado-Edmonds theorem. Discrete Applied Mathematics.
[6] Feige, U. (1998). A threshold of ln n for approximating set cover. JACM.
[7] Feige, U., Mirrokni, V. S., and Vondrák, J. (2011). Maximizing non-monotone submodular functions. SIAM
Journal on Computing.
[8] Feldman, V. and Kothari, P. (2014). Learning coverage functions and private release of marginals. In COLT.
[9] Feldman, V., Kothari, P., and Vondrák, J. (2013). Representation, approximation and learning of submodular
functions using low-rank decision trees. In COLT.
[10] Feldman, V. and Vondrák, J. (2013). Optimal bounds on approximation of submodular and XOS functions
by juntas. In FOCS.
[11] Feldman, V. and Vondrák, J. (2015). Tight bounds on low-degree spectral concentration of submodular and
XOS functions. CoRR.
[12] Golovin, D., Faulkner, M., and Krause, A. (2010). Online distributed sensor selection. In IPSN.
[13] Gomez Rodriguez, M., Leskovec, J., and Krause, A. (2010). Inferring networks of diffusion and influence.
In SIGKDD.
[14] Hang, L. (2011). A short introduction to learning to rank. IEICE.
[15] Iyer, R. K. and Bilmes, J. A. (2013). Submodular optimization with submodular cover and submodular
knapsack constraints. In NIPS.
[16] Iyer, R. K., Jegelka, S., and Bilmes, J. A. (2013). Curvature and optimal algorithms for learning and
minimizing submodular functions. In NIPS.
[17] Jegelka, S. and Bilmes, J. (2011a). Submodularity beyond submodular energies: coupling edges in graph
cuts. In CVPR.
[18] Jegelka, S. and Bilmes, J. A. (2011b). Approximation bounds for inference using cooperative cuts. In
ICML.
[19] Kempe, D., Kleinberg, J., and Tardos, É. (2003). Maximizing the spread of influence through a social
network. In SIGKDD.
[20] Leskovec, J., Krause, A., Guestrin, C., Faloutsos, C., VanBriesen, J., and Glance, N. (2007). Cost-effective
outbreak detection in networks. In SIGKDD.
[21] Lin, H. and Bilmes, J. (2011a). A class of submodular functions for document summarization. In NAACL
HLT.
[22] Lin, H. and Bilmes, J. A. (2011b). Optimal selection of limited vocabulary speech corpora. In INTERSPEECH.
[23] Nemhauser, G. L., Wolsey, L. A., and Fisher, M. L. (1978). An analysis of approximations for maximizing
submodular set functions II. Math. Programming Study 8.
[24] Rosenfeld, N. and Globerson, A. (2016). Optimal Tagging with Markov Chain Optimization. arXiv
preprint arXiv:1605.04719.
[25] Sviridenko, M., Vondrák, J., and Ward, J. (2015). Optimal approximation for submodular and supermodular
optimization with bounded curvature. In SODA.
[26] Valiant, L. G. (1984). A Theory of the Learnable. Commun. ACM.
[27] Vondrák, J. (2010). Submodularity and curvature: the optimal algorithm. RIMS.
[28] Yue, Y. and Joachims, T. (2008). Predicting diverse subsets using structural svms. In ICML.
Combining Fully Convolutional and Recurrent Neural Networks for 3D Biomedical Image Segmentation
Jianxu Chen
University of Notre Dame
jchen16@nd.edu
Yizhe Zhang
University of Notre Dame
yzhang29@nd.edu
Lin Yang
University of Notre Dame
lyang5@nd.edu
Mark Alber
University of Notre Dame
malber@nd.edu
Danny Z. Chen
University of Notre Dame
dchen@nd.edu
Abstract
Segmentation of 3D images is a fundamental problem in biomedical image analysis.
Deep learning (DL) approaches have achieved state-of-the-art segmentation performance. To exploit the 3D contexts using neural networks, known DL segmentation
methods, including 3D convolution, 2D convolution on planes orthogonal to 2D
image slices, and LSTM in multiple directions, all suffer incompatibility with the
highly anisotropic dimensions in common 3D biomedical images. In this paper,
we propose a new DL framework for 3D image segmentation, based on a combination of a fully convolutional network (FCN) and a recurrent neural network
(RNN), which are responsible for exploiting the intra-slice and inter-slice contexts,
respectively. To our best knowledge, this is the first DL framework for 3D image
segmentation that explicitly leverages 3D image anisotropism. Evaluating using a
dataset from the ISBI Neuronal Structure Segmentation Challenge and in-house
image stacks for 3D fungus segmentation, our approach achieves promising results
compared to the known DL-based 3D segmentation approaches.
1 Introduction
In biomedical image analysis, a fundamental problem is the segmentation of 3D images, to identify
target 3D objects such as neuronal structures [1] and knee cartilage [15]. In biomedical imaging, 3D
images often consist of highly anisotropic dimensions [11], that is, the scale of each voxel in depth
(the z-axis) can be much larger (e.g., 5–10 times) than that in the xy plane.
On various biomedical image segmentation tasks, deep learning (DL) methods have achieved tremendous success in terms of accuracy (outperforming classic methods by a large margin [4]) and
generality (mostly application-independent [16]). For 3D segmentation, known DL schemes can be
broadly classified into four categories. (I) 2D fully convolutional networks (FCN), such as U-Net
[16] and DCAN [2], can be applied to each 2D image slice, and 3D segmentation is then generated
by concatenating the 2D results. (II) 3D convolutions can be employed to replace 2D convolutions
[10], or combined with 2D convolutions into a hybrid network [11]. (III) Tri-planar schemes (e.g.,
[15]) apply three 2D convolutional networks based on orthogonal planes (i.e., the xy, yz, and xz
planes) to perform voxel classification. (IV) 3D segmentation can also be conducted by recurrent
neural networks (RNN). A most representative RNN based scheme is Pyramid-LSTM [18], which
uses six generalized long short term memory networks to exploit the 3D context.
Figure 1: An overview of our DL framework for 3D segmentation. There are two key components in
the architecture: kU-Net and BDC-LSTM. kU-Net is a type of FCN and is applied to 2D slices to
exploit intra-slice contexts. BDC-LSTM, a generalized LSTM network, is applied to a sequence of
2D feature maps, from 2D slice z − τ to 2D slice z + τ, extracted by kU-Nets, to extract hierarchical
features from the 3D contexts. Finally, a softmax function (the green arrows) is applied to the result
of each slice in order to build the segmentation probability map.
There are mainly three issues to the known DL-based 3D segmentation methods. First, simply linking
2D segmentations into 3D cannot leverage the spatial correlation along the z-direction. Second,
incorporating 3D convolutions may incur extremely high computation costs (e.g., high memory
consumption and long training time [10]). Third, both 3D convolution and other circumventive
solutions (to reduce intensive computation of 3D convolution), like tri-planar schemes or PyramidLSTM, perform 2D convolutions with isotropic kernel on anisotropic 3D images. This could be
problematic, especially for images with substantially lower resolution in depth (the z-axis). For
instance, both the tri-planar schemes and Pyramid-LSTM perform 2D convolutions on the xz and
yz planes. For two orthogonal one-voxel wide lines in the xz plane, one along the z-direction and
the other along the x-direction, they may correspond to two structures at very different scales, and
consequently may correspond to different types of objects ? or even may not both correspond
to objects of interest. But, 2D convolutions on the xz plane with isotropic kernel are not able to
differentiate these two lines. On the other hand, 3D objects of a same type, if rotated in 3D, may have
very different appearances in the xz or yz plane. This fact makes the features extracted by such 2D
isotropic convolutions in the xz or yz plane suffer poor generality (e.g., may cause overfitting).
In common practice, a 3D biomedical image is often represented as a sequence of 2D slices (called
a z-stack). Recurrent neural networks, especially LSTM [8], are an effective model to process
sequential data [14, 17]. Inspired by these facts, we propose a new framework combining two DL
components: a fully convolutional network (FCN) to extract intra-slice contexts, and a recurrent
neural network (RNN) to extract inter-slice contexts. Our framework is based on the following ideas.
Our FCN component employs a new deep architecture for 2D feature extraction. It aims to efficiently
compress the intra-slice information into hierarchical features. Comparing to known FCN for 2D
biomedical imaging (e.g., U-Net [16]), our new FCN is considerably more effective in dealing with
objects of very different scales by simulating human behaviors in perceiving multi-scale information.
We introduce a generalized RNN to exploit 3D contexts, which essentially applies a series of 2D
convolutions on the xy plane in a recurrent fashion to interpret 3D contexts while propagating
contextual information in the z-direction. Our key idea is to hierarchically assemble intra-slice
contexts into 3D contexts by leveraging the inter-slice correlations. The insight is that our RNN can
distill 3D contexts in the same spirit as the 2D convolutional neural network (CNN) extracting a
hierarchy of contexts from a 2D image. Compared to known RNN models for 3D segmentation,
such as Pyramid-LSTM [18], our RNN model is free of the problematic isotropic convolutions on
anisotropic images, and can exploit 3D contexts more efficiently by combining with FCN.
The essential difference between our new DL framework and the known DL-based 3D segmentation
approaches is that we explicitly leverage the anisotropism of 3D images and efficiently construct a
hierarchy of discriminative features from 3D contexts by performing systematic 2D operations. Our
framework can serve as a new paradigm of migrating 2D DL architectures (e.g., CNN) to effectively
exploit 3D contexts and solve 3D image segmentation problems.
2 Methodology
A schematic view of our DL framework is given in Fig. 1. This framework is a combination of two
key components: an FCN (called kU-Net) and an RNN (called BDC-LSTM), to exploit intra-slice
Figure 2: Illustrating four different ways to organize k submodule U-Nets in kU-Net (here k = 2).
U-Net-2 works in a coarser scale (downsampled once from the original image), while U-Net-1 works
in a finer scale (directly cropped from the original image). kU-Net propagates high level information
extracted by U-Net-2 to U-Net-1. (A) U-Net-1 fuses the output of U-Net-2 in the downsampling
stream. (B) U-Net-1 fuses the output of U-Net-2 in the upsampling stream. (C) U-Net-1 fuses the
intermediate result of U-Net-2 in the most abstract layer. (D) U-Net-1 takes every piece of information
from U-Net-2 in the commensurate layers. Architecture (A) is finally adopted for kU-Net.
and inter-slice contexts, respectively. Section 2.1 presents the kU-Net, and Section 2.2 introduces the
derivation of the BDC-LSTM. We then show how to combine these two components in the framework
to conduct 3D segmentation. Finally, we discuss the training strategy.
2.1 The FCN Component: kU-Net
The FCN component aims to construct a feature map for each 2D slice, from which object-relevant
information (e.g., texture, shapes) will be extracted and object-irrelevant information (e.g., uneven
illumination, imaging contrast) will be discarded. By doing so, the next RNN component can
concentrate on the inter-slice context.
A key challenge to the FCN component is the multi-scale issue. Namely, objects in biomedical images,
specifically in 2D slices, can have very different scales and shapes. But, the common FCN [13]
and other known variants for segmenting biomedical images (e.g., U-Net [16]) work on a fixed-size
perception field (e.g., a 500 × 500 region in the whole 2D slice). When objects are of larger scale
than the pre-defined perception field size, it can be troublesome for such FCN methods to capture the
high level context (e.g., the overall shapes). In the literature, a multi-stream FCN was proposed in
ProNet [19] to address this multi-scale issue in natural scene images. In ProNet, the same image is
resized to different scales and fed in parallel to a shared FCN with the same parameters. However,
the mechanism of shared parameters may make it not suitable for biomedical images, because objects
of different scales may have very different appearances and require different FCNs to process.
We propose a new FCN architecture to simulate how human experts perceive multi-scale information,
in which multiple submodule FCNs are employed to work on different image scales systematically.
Here, we use U-Net [16] as the submodule FCN and call the new architecture kU-Net. U-Net [16] is
chosen because it is a well-known FCN achieving huge success in biomedical image segmentation.
U-Net [16] consists of four downsampling steps followed by four upsampling steps. Skip-layer
connections exist between each downsampled feature map and the commensurate upsampled feature
map. We refer to [16] for the detailed structure of U-Net.
We observed that, when human experts label the ground truth, they tend to first zoom out the image
to figure out where are the target objects and then zoom in to label the accurate boundaries of
those targets. There are two critical mechanisms in kU-Net to simulate such human behaviors. (1)
kU-Net employs a sequence of submodule FCNs to extract information at different scales sequentially
(from the coarsest scale to the finest scale). (2) The information extracted by the submodule FCN
responsible for a coarser scale will be propagated to the subsequent submodule FCN to assist the
feature extraction in a finer scale.
First, we create different scales of an original input 2D image by a series of connections of k − 1
max-pooling layers. Let It be the image of scale t (t = 1, . . . , k), i.e., the result after t − 1 max-pooling layers (I1 is the original image). Each pixel in It corresponds to 2^(t−1) pixels in the original
input window size the same across all U-Nets by using crop layers. Intuitively, U-Net-1 to U-Net-k
all have the same input size, while U-Net-1 views the smallest region with the highest resolution and
U-Net-k views the largest region with the lowest resolution. In other words, for any 1 ≤ t1 < t2 ≤ k,
U-Net-t2 is responsible for a larger image scale than U-Net-t1.
Second, we need to propagate the higher level information extracted by U-Net-t (2 ≤ t ≤ k) to
the next submodule, i.e., U-Net-(t − 1), so that clues from a coarser scale can assist the work in
a finer scale. A natural strategy is to copy the result from U-Net-t to the commensurate layer in
U-Net-(t − 1). As shown in Fig. 2, there are four typical ways to achieve this: (A) U-Net-(t − 1) only
uses the final result from U-Net-t and uses it at the start; (B) U-Net-(t − 1) only uses the final result
from U-Net-t and uses it at the end; (C) U-Net-(t − 1) only uses the most abstract information from
U-Net-t; (D) U-Net-(t − 1) uses every piece of information from U-Net-t. Based on our trial studies,
type (A) and type (D) achieved the best performance. Since type (A) has fewer parameters than (D),
we chose type (A) as our final architecture to organize the sequence of submodule FCNs.
From a different perspective, each submodule U-Net can be viewed as a "super layer". Therefore,
the kU-Net is a "deep" deep learning model. Because the parameter k exponentially increases the
input window size of the network, a small k is sufficient to handle many biomedical images (we use
k = 2 in our experiments). Appended with a 1×1 convolution (to convert the number of channels in
the feature map) and a softmax layer, the kU-Net can be used for 2D segmentation problems. We
will show (see Table 1) that kU-Net (i.e., a sequence of collaborative U-Nets) can achieve better
performance than a single U-Net in terms of segmentation accuracy.
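A minimal sketch of the type (A) wiring for k = 2 is given below. This is an editorial addition: TinyUNet is a hypothetical stand-in for the full U-Net submodule of [16], and same-padding convolutions replace the crop layers of the original design, so only the coarse-to-fine data flow is faithful.

import torch
import torch.nn as nn

class TinyUNet(nn.Module):
    """Hypothetical stand-in for a U-Net submodule: same role, far fewer layers."""
    def __init__(self, in_ch, out_ch=64):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.ReLU(),
            nn.Conv2d(out_ch, out_ch, 3, padding=1), nn.ReLU())
    def forward(self, x):
        return self.body(x)

class KUNet(nn.Module):                        # k = 2, wiring of type (A)
    def __init__(self):
        super().__init__()
        self.pool = nn.MaxPool2d(2)            # build the coarser scale I2
        self.unet2 = TinyUNet(in_ch=1)         # U-Net-2: coarse scale
        self.up = nn.Upsample(scale_factor=2)  # bring the coarse output back
        self.unet1 = TinyUNet(in_ch=1 + 64)    # U-Net-1: fuses I1 with U-Net-2's output
        self.head = nn.Conv2d(64, 2, 1)        # 1x1 conv; softmax would follow
    def forward(self, x):
        coarse = self.up(self.unet2(self.pool(x)))   # type (A): fuse at the start
        return self.head(self.unet1(torch.cat([x, coarse], dim=1)))

out = KUNet()(torch.randn(1, 1, 64, 64))       # -> shape (1, 2, 64, 64)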
2.2 The RNN Component: BDC-LSTM
In this section, we first review the classic LSTM network [8], and the generalized convolutional
LSTM [14, 17, 18] (denoted by CLSTM). Next, we describe how our RNN component, called
BDC-LSTM, is extended from CLSTM. Finally, we propose a deep architecture for BDC-LSTM,
and discuss its advantages over other variants.
LSTM and CLSTM: RNN (e.g., LSTM) is a neural network that maintains a self-connected internal
status acting as a "memory". The ability to "remember" what has been seen allows RNN to attain
exceptional performance in processing sequential data.
Recently, a generalized LSTM, denoted by CLSTM, was developed [14, 17, 18]. CLSTM explicitly
assumes that the input is images and replaces the vector multiplication in LSTM gates by convolutional
operators. It is particularly efficient in exploiting image sequences. For instance, it can be used for
image sequence prediction either in an encoder-decoder framework [17] or by combining with optical
flows [14]. Specifically, CLSTM can be formulated as follows.
i_z = σ(x_z ∗ W_xi + h_{z−1} ∗ W_hi + b_i)
f_z = σ(x_z ∗ W_xf + h_{z−1} ∗ W_hf + b_f)
c_z = c_{z−1} ⊙ f_z + i_z ⊙ tanh(x_z ∗ W_xc + h_{z−1} ∗ W_hc + b_c)      (1)
o_z = σ(x_z ∗ W_xo + h_{z−1} ∗ W_ho + b_o)
h_z = o_z ⊙ tanh(c_z)
Here, ∗ denotes convolution and ⊙ denotes element-wise product. σ(·) and tanh(·) are the logistic
sigmoid and hyperbolic tangent functions; i_z, f_z, o_z are the input gate, forget gate, and output gate;
b_i, b_f, b_c, b_o are bias terms; and x_z, c_z, h_z are the input, the cell activation state, and the hidden
state at slice z. The W's are diagonal weight matrices governing the value transitions. For instance, W_hf
controls how the forget gate takes values from the hidden state. The input to CLSTM is a feature
map of size f_in × l_in × w_in, and the output is a feature map of size f_out × l_out × w_out, with
l_out ≤ l_in and w_out ≤ w_in; l_out and w_out depend on the size of the convolution kernels and whether padding is used.
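A single CLSTM step of Eq. (1) can be sketched in a few lines. This is an editorial addition under simplifying assumptions: single-channel feature maps, 5×5 kernels, and scipy's convolve2d standing in for the learned convolutions.

import numpy as np
from scipy.signal import convolve2d

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

def clstm_step(x_z, h_prev, c_prev, W, b):
    """One CLSTM step; W maps gate -> (input kernel, hidden kernel), b maps gate -> bias."""
    conv = lambda img, ker: convolve2d(img, ker, mode="same")
    i = sigmoid(conv(x_z, W["i"][0]) + conv(h_prev, W["i"][1]) + b["i"])
    f = sigmoid(conv(x_z, W["f"][0]) + conv(h_prev, W["f"][1]) + b["f"])
    c = c_prev * f + i * np.tanh(conv(x_z, W["c"][0]) + conv(h_prev, W["c"][1]) + b["c"])
    o = sigmoid(conv(x_z, W["o"][0]) + conv(h_prev, W["o"][1]) + b["o"])
    return o * np.tanh(c), c

rng = np.random.default_rng(0)
W = {g: (0.1 * rng.normal(size=(5, 5)), 0.1 * rng.normal(size=(5, 5))) for g in "ifco"}
b = {g: 0.0 for g in "ifco"}
h = c = np.zeros((32, 32))
for x_z in rng.normal(size=(4, 32, 32)):      # sweep a short z-stack
    h, c = clstm_step(x_z, h, c, W, b)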
BDC-LSTM: We extend CLSTM to Bi-Directional Convolutional LSTM (BDC-LSTM). The key
extension is to stack two layers of CLSTM, which work in two opposite directions (see Fig. 3(A)).
The contextual information carried in the two layers, one in the z⁻-direction and the other in the z⁺-direction,
is concatenated as output. It can be interpreted as follows. To determine the hidden state at a slice
z, we take the 2D hierarchical features in slice z (i.e., x_z) and the contextual information from both
the z⁺ and z⁻ directions. One layer of CLSTM will integrate the information from the z⁻-direction
(resp., z⁺-direction) and x_z to capture the minus-side (resp., plus-side) context (see Fig. 3(B)). Then,
the two one-side contexts (z⁺ and z⁻) will be fused.
In fact, Pyramid-LSTM [18] can be viewed as a different extension of CLSTM, which employs six
CLSTMs in six different directions (x⁺/⁻, y⁺/⁻, and z⁺/⁻) and sums up the outputs of the six
CLSTMs. However, useful information may be lost during the output summation. Intuitively, the sum
of six outputs can only inform a simplified context instead of the exact situations in different directions.
It should be noted that concatenating six outputs may greatly increase the memory consumption, and
is thus impractical in Pyramid-LSTM. Hence, besides avoiding problematic convolutions on the xz
and yz planes (as discussed in Section 1), BDC-LSTM is in principle more effective in exploiting
inter-slice contexts than Pyramid-LSTM.
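The bidirectional wrapper itself is simple once a CLSTM step is available. The sketch below is an editorial addition (reusing the clstm_step signature from the previous sketch): it runs one sweep per direction and concatenates the two hidden states rather than summing them.

import numpy as np

def bdc_lstm(stack, params_fwd, params_bwd, step):
    """stack: list of 2D feature maps ordered by z; step: a CLSTM step function."""
    def sweep(frames, params):
        h = c = np.zeros_like(frames[0])
        out = []
        for x in frames:
            h, c = step(x, h, c, *params)
            out.append(h)
        return out
    fwd = sweep(stack, params_fwd)               # z-minus -> z-plus sweep
    bwd = sweep(stack[::-1], params_bwd)[::-1]   # z-plus -> z-minus sweep
    # concatenate (not sum) the two one-sided contexts along a channel axis
    return [np.stack([hf, hb], axis=0) for hf, hb in zip(fwd, bwd)]

# usage with clstm_step from the previous sketch:
#   out = bdc_lstm(frames, (W, b), (W, b), clstm_step)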
Deep Architectures: Multiple BDC-LSTMs can be stacked into a deep structure by taking the output
feature map of one BDC-LSTM as the input to another BDC-LSTM. In this sense, each BDC-LSTM
can be viewed as a super "layer" in the deep structure. Besides simply taking one output as another
input, we can also insert other operations, like max-pooling or deconvolution, in between BDC-LSTM
layers. As a consequence, deep architectures for 2D CNN can be easily migrated or generalized to
build deep architectures for BDC-LSTM. This is shown in Fig. 3(C)-(D). The underlying relationship
between deep BDC-LSTM and 2D deep CNN is that deep CNN extracts a hierarchy of non-linear
features from a 2D image and a deeper layer aims to interpret higher level information of the image,
while deep BDC-LSTM extracts a hierarchy of contextual features from the 3D context
and a deeper BDC-LSTM layer seeks to interpret higher level 3D contexts.
In [14, 17, 18], multiple CLSTMs were simply stacked one by one, maybe with different kernel sizes,
in which a CLSTM "layer" may be viewed as a degenerated BDC-LSTM "layer". When considering
the problem in the context of CNN, as discussed above, one can see that no feature hierarchy was
even formed in these simple architectures. Usually, convolutional layers are followed by subsampling,
such as max-pooling, in order to form the hierarchy.
We propose a deep architecture combining max-pooling, dropout and deconvolution layers with the
BDC-LSTM layers. The detailed structure is as follows (the numbers in parentheses indicate the size
changes of the feature map in each 2D slice): input (64×126×126), dropout layer with p = 0.5, two
BDC-LSTMs with 64 hidden units and 5×5 kernels (64×118×118), 2×2 max-pooling (64×59×59),
dropout layer with p = 0.5, two BDC-LSTMs with 64 hidden units and 5×5 kernels (64×51×51), 2×2
deconvolution (64×102×102), dropout layer with p = 0.5, 3×3 convolution layer without recurrent
connections (64×100×100), 1×1 convolution layer without recurrent connections (2×100×100).
(Note: all convolutions in BDC-LSTM use the same kernel size as indicated in the layers.) Thus,
to predict the probability map of a 100×100 region, we need the 126×126 region centered at the
same position as the input. In the evaluation stage, the whole feature map can be processed using the
overlapping-tile strategy [16], because deep BDC-LSTM is fully convolutional along the z-direction.
Suppose the feature map of a whole slice is of size 64×W×H. The input tensor will be padded with
zeros on the borders to resize into 64×(W+26)×(H+26). Then, a sequence of 64×126×126
patches will be processed each time. The results are stitched to form the 3D segmentation.
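The 126 → 100 input/output pairing follows from simple size arithmetic: each unpadded 5×5 convolution removes 4 pixels per dimension, pooling halves the size, and the 2×2 deconvolution doubles it. The short trace below (an editorial addition) reproduces the sizes quoted above.

def trace(size=126):
    steps = [("BDC-LSTM 5x5", lambda s: s - 4),
             ("BDC-LSTM 5x5", lambda s: s - 4),
             ("maxpool 2x2",  lambda s: s // 2),
             ("BDC-LSTM 5x5", lambda s: s - 4),
             ("BDC-LSTM 5x5", lambda s: s - 4),
             ("deconv 2x2",   lambda s: s * 2),
             ("conv 3x3",     lambda s: s - 2),
             ("conv 1x1",     lambda s: s)]
    for name, op in steps:
        size = op(size)
        print(f"{name:14s} -> {size}x{size}")

trace()   # 126 -> 118 -> 59 -> 51 -> 102 -> 100, ending at 100x100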
Figure 3: (A) The structure of BDC-LSTM, where two layers of CLSTM modules are connected in a
bi-directional manner. (B) A graphical illustration of information propagation through BDC-LSTM
along the z-direction. (C) The circuit diagram of BDC-LSTM. The green arrows represent the
recurrent connections in opposite directions. When rotating this diagram by 90 degrees, it has a
similar structure of a layer in CNN, except the recurrent connections. (D) The deep structure of
BDC-LSTM used in our method. BDC-LSTM can be stacked in a way analogous to a layer in CNN.
The red arrows are 5×5 convolutions. The yellow and purple arrows indicate max-pooling and
deconvolution, respectively. The rightmost blue arrow indicates a 1×1 convolution. Dropout is
applied (not shown) after the input layer, the max-pooling layer and the deconvolution layer.
2.3 Combining kU-Net and BDC-LSTM
The motivation of solving 3D segmentation by combining FCN (kU-Net) and RNN (BDC-LSTM) is
to distribute the burden of exploiting 3D contexts. kU-Net extracts and compresses the hierarchy of
intra-slice contexts into feature maps, and BDC-LSTM distills the 3D context from a sequence of
abstracted 2D contexts. These two components work coordinately, as follows.
Suppose the 3D image consists of Nz 2D slices of size Nx × Ny each. First, kU-Net extracts feature
maps of size 64 × Nx × Ny, denoted by f2D^z, from each slice z. The overlapping-tile strategy [16]
will be adopted when the 2D images are too big to be processed by kU-Net in one shot. Second,
BDC-LSTM works on f2D^z to build the hierarchy of non-linear features from 3D contexts and
generates another 64 × Nx × Ny feature map, denoted by f3D^z, z = 1, . . . , Nz. For each slice z, f2D^h
(h = z − τ, . . . , z, . . . , z + τ) will serve as the context (τ = 1 in our implementation). Finally, a
softmax function is applied to f3D^z to generate the 3D segmentation probability map.
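Putting the two components together, inference amounts to a per-slice FCN pass, a cross-slice RNN pass, and a per-voxel softmax. The sketch below is an editorial addition in which fcn and rnn are random/identity placeholders; only the data flow and shapes are meaningful.

import numpy as np

def segment_stack(volume, fcn, rnn):
    """volume: (Nz, Nx, Ny); fcn: 2D slice -> (64, Nx, Ny); rnn: list of maps -> list of maps."""
    feats_2d = [fcn(volume[z]) for z in range(volume.shape[0])]   # intra-slice contexts
    feats_3d = rnn(feats_2d)                                      # inter-slice contexts
    probs = []
    for f in feats_3d:
        logits = f[:2]                        # assume the first two channels score the classes
        e = np.exp(logits - logits.max(axis=0, keepdims=True))    # stable softmax
        probs.append(e[1] / e.sum(axis=0))    # foreground probability per voxel
    return np.stack(probs)                    # (Nz, Nx, Ny)

rng = np.random.default_rng(0)
fcn = lambda sl: rng.normal(size=(64,) + sl.shape)   # placeholder for kU-Net
rnn = lambda fs: fs                                  # placeholder for BDC-LSTM
print(segment_stack(rng.normal(size=(5, 16, 16)), fcn, rnn).shape)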
2.4 Training Strategy
Our whole network, including kU-Net and BDC-LSTM, can be trained either end-to-end or in a decoupled manner. Sometimes, biomedical images are too big to be processed as a whole. Overlapping-tile
is a common approach [16], but can also reduce the range of the context utilized by the networks.
The decoupled training, namely, training kU-Net and BDC-LSTM separately, is especially useful
in situations where the effective context of each voxel is very large. Given the same amount of
computing resources (e.g., GPU memory), when allocating all resources to train one component
only, both kU-Net and BDC-LSTM can take much larger tiles as input. In practice, even though the
end-to-end training has its advantage of simplicity and consistency, the decoupled training strategy is
preferred for challenging problems.
kU-Net is initialized using the strategy in [7] and trained using Adam [9], with first moment coefficient
β1 = 0.9, second moment coefficient β2 = 0.999, ε = 1e−10, and a constant learning rate 5e−5. The
training method for BDC-LSTM is RMSprop [6], with smoothing constant α = 0.9 and ε = 1e−5.
The initial learning rate is set as 1e−3 and halves every 2000 iterations, until 1e−5. In each iteration,
one training example is randomly selected. The training data is augmented with rotation, flipping,
and mirroring. To avoid gradient explosion, the gradient is clipped to [−5, 5] in each iteration. The
parameters in BDC-LSTM are initialized with random values uniformly selected from [−0.02, 0.02].
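For reference, the optimizer settings above can be written out as follows. This is a hypothetical PyTorch port (an editorial addition; the paper's implementation is in Torch7 [5]), and the two Conv2d modules are stand-ins for kU-Net and BDC-LSTM.

import torch
import torch.nn as nn

kunet = nn.Conv2d(1, 64, 3)      # stand-ins for the two trained components
bdclstm = nn.Conv2d(64, 2, 3)

for p in bdclstm.parameters():   # BDC-LSTM weights: uniform in [-0.02, 0.02]
    nn.init.uniform_(p, -0.02, 0.02)

opt_fcn = torch.optim.Adam(kunet.parameters(), lr=5e-5,
                           betas=(0.9, 0.999), eps=1e-10)
opt_rnn = torch.optim.RMSprop(bdclstm.parameters(), lr=1e-3,
                              alpha=0.9, eps=1e-5)
# halve the BDC-LSTM rate every 2000 iterations (the 1e-5 floor would be
# enforced in the training loop; StepLR itself has no floor)
sched = torch.optim.lr_scheduler.StepLR(opt_rnn, step_size=2000, gamma=0.5)

def clip_gradients(module):      # per-iteration gradient clipping to [-5, 5]
    for p in module.parameters():
        if p.grad is not None:
            p.grad.clamp_(-5, 5)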
Table 1: Experimental results on the ISBI neuron dataset and in-house 3D fungus datasets.

Method                     Neuron: Vrand   Neuron: Vinfo   Fungus: Pixel Error
Pyramid-LSTM [18]          0.9677          0.9829          N/A
U-Net [16]                 0.9728          0.9866          0.0263
Tri-Planar [15]            0.8462          0.9180          0.0375
3D Conv [10]               0.8178          0.9125          0.0630
Ours (FCN only)            0.9749          0.9869          0.0242
Ours (FCN+simple RNN)      0.9742          0.9869          0.0241
Ours (FCN+deep RNN)        0.9753          0.9870          0.0215

We use a weighted cross-entropy loss in both the kU-Net and BDC-LSTM training. In biomedical
image segmentation, there may often be certain important regions in which errors should be reduced
as much as possible. For instance, when two objects touch tightly to each other, it is important to
make correct segmentation along the separating boundary between the two objects, while errors near
the non-touching boundaries are of less importance. Hence, we adopt the idea in [16] to assign a
unique weight to each voxel in the loss calculation.
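One common way to realize such a per-voxel weighted cross-entropy, assumed here rather than taken from the authors' code, is to compute the unreduced loss and scale each voxel by its weight before averaging:

import torch
import torch.nn.functional as F

def weighted_ce(logits, target, weight_map):
    """logits: (N, 2, H, W); target: (N, H, W) in {0, 1}; weight_map: (N, H, W)."""
    loss = F.cross_entropy(logits, target, reduction="none")   # per-voxel loss, (N, H, W)
    return (loss * weight_map).mean()

logits = torch.randn(1, 2, 8, 8, requires_grad=True)
target = torch.randint(0, 2, (1, 8, 8))
weights = torch.ones(1, 8, 8)
weights[:, 3:5, 3:5] = 10.0      # e.g., up-weight voxels near a touching boundary
weighted_ce(logits, target, weights).backward()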
3 Experiments
Our framework was implemented in Torch7 [5] and the RNN package [12]. We conducted experiments
on a workstation with 12GB NVIDIA TESLA K40m GPU, using CuDNN library (v5) for GPU
acceleration. Our approach was evaluated in two 3D segmentation applications and compared with
several state-of-the-art DL methods.
3D Neuron Structures: The first evaluation dataset was from the ISBI challenge on the segmentation
of neuronal structures in 3D electron microscopic (EM) images [1]. The objective is to segment the
neuron boundaries. Briefly, there are two image stacks of 512 × 512 × 30 voxels, where each voxel
measures 4 × 4 × 50 nm. Noise and section alignment errors exist in both stacks. One stack (with
ground truth) was used for training, and the other was for evaluation. We adopted the same metrics
as in [1], i.e., foreground-restricted rand score (Vrand) and information theoretic score (Vinfo) after
border thinning. As shown in [1], Vrand and Vinfo are good approximations to the difficulty for human
to correct the segmentation errors, and are robust to border variations due to the thickness.
3D Fungus Structures: Our method was also evaluated on in-house datasets for the segmentation of
tubular fungus structures in 3D images from Serial Block-Face Scanning Electron Microscope. The
ratio of the voxel scales is x : y : z = 1 : 1 : 3.45. There are five stacks, in all of which each slice is
a grayscale image of 853 × 877 pixels. We manually labeled the first 16 slices in one stack as the
training data and used the other four stacks, each containing 81 sections, for evaluation. The metric
to quantify the segmentation accuracy is pixel error, defined as the Euclidean distance between the
ground truth label (0 or 1) and segmentation probability (a value in the range of [0, 1]). Note that we
do not use the same metric as the neuron dataset, because the ?border thinning" is not applicable to
the fungus datasets. The pixel error was actually adopted at the time of the ISBI neuron segmentation
challenge, which is also a well-recognized metric to quantify pixel-level accuracy. It is also worth
mentioning that it is impractical to label four stacks for evaluation due to intensive labor. Hence,
we prepared the ground truth every 5 sections in each evaluation stack (i.e., 5, 10, 15, . . ., 75, 80).
In total, 16 sections were selected to estimate the performance on a whole stack. Namely, all 81
sections in each stack were segmented, but 16 of them were used to compute the evaluation score in
the corresponding stack. The reported performance is the average of the scores for all four stacks.
Recall the four categories of known deep learning based 3D segmentation methods described in
Section 1. We selected one typical method from each category for comparison. (1) U-Net [16],
which achieved the state-of-the-art segmentation accuracy on 2D biomedical images, is selected as
the representative scheme of linking 2D segmentations into 3D results. (Note: We are aware of the
method [3] which is another variant of 2D FCN and achieved excellent performance on the neuron
dataset. But, different from U-Net, the generality of [3] in different applications is not yet clear. Our
test of [3] on the in-house datasets showed an at least 5% lower F1-score than U-Net. Thus, we
decided to take U-Net as the representative method in this category.) (2) 3D-Conv [10] is a method
using CNN with 3D convolutions. (3) Tri-planar [15] is a classic solution to avoid high computing
Figure 4: (A) A cropped region in a 2D fungus image. (B) The result using only the FCN component.
(C) The result of combining FCN and RNN. (D) The true fungi to be segmented in (A).
costs of 3D convolutions, which replaces 3D convolution with three 2D convolutions on orthogonal
planes. (4) Pyramid-LSTM [18] is the best known generalized LSTM networks for 3D segmentation.
Results: The results on the 3D neuron dataset and the fungus datasets are shown in Table 1. It is
evident that our proposed kU-Net, when used alone, achieves considerable improvement over U-Net
[16]. Our approach outperforms the known DL methods utilizing 3D contexts. Moreover, one can
see that our proposed deep architecture achieves better performance than simply stacking multiple
BDC-LSTMs together. As discussed in Section 2.2, adding subsampling layers like in 2D CNN
makes the RNN component able to perceive higher level 3D contexts. It is worth mentioning that our
two evaluation datasets are quite representative. The fungus data has small anisotropism (z resolution
is close to xy resolution). The 3D neuron dataset has large anisotropism (z resolution is much less
than xy resolution). The effectiveness of our framework on handling and leveraging anisotropism
can be demonstrated.
We should mention that we re-implemented Pyramid-LSTM [18] in Torch7 and tested it on the fungus
datasets. But, the memory requirement of Pyramid-LSTM, when implemented in Torch7, was too
large for our GPU. For the original network structure, the largest possible cubical region to process
each time within our GPU memory capacity was 40 × 40 × 8. Using the same hyper-parameters
in [18], we cannot obtain acceptable results due to the limited processing cube. (The result of
Pyramid-LSTM on the 3D neuron dataset was fetched from the ISBI challenge leader board¹ on
May 10, 2016.) Here, one may see that our method is much more efficient in GPU memory, when
implemented under the same deep learning framework and tested on the same machine.
Some results are shown in Fig. 4 to qualitatively compare the results using the FCN component
alone and the results of combining RNN and FCN. In general, both methods make nearly no false
negative errors. But, the RNN component can help to (1) suppress false positive errors by maintaining
inter-slice consistency, and (2) make more confident prediction in ambiguous cases by leveraging
the 3D context. In a nutshell, FCN collects as much discriminative information as possible within
each slice and RNN makes further refinement according to inter-slice correlation, so that an accurate
segmentation can be made at each voxel.
4 Conclusions and Future Work
In this paper, we introduce a new deep learning framework for 3D image segmentation, based on
a combination of an FCN (i.e., kU-Net) to exploit 2D contexts and an RNN (i.e., BDC-LSTM) to
integrate contextual information along the z-direction. Evaluated in two different 3D biomedical
image segmentation applications, our proposed approach can achieve the state-of-the-art performance
and outperform known DL schemes utilizing 3D contexts. Our framework provides a new paradigm
to migrate the superior performance of 2D deep architectures to exploit 3D contexts. Following
this new paradigm, we will explore BDC-LSTMs in different deep architectures to achieve further improvement and conduct more extensive evaluations on different datasets, such as BraTS
(http://www.braintumorsegmentation.org/) and MRBrainS (http://mrbrains13.isi.uu.nl).
5 Acknowledgement
This research was support in part by NSF Grants CCF-1217906 and CCF-1617735 and NIH Grants
R01-GM095959 and U01-HL116330. Also, we would like to thank Dr. Viorica Patraucean at
University of Cambridge (UK) for discussion of BDC-LSTM, and Prof. David P. Hughes and
Dr. Maridel Fredericksen at Pennsylvania State University (US) for providing the 3D fungus datasets.
¹ http://brainiac2.mit.edu/isbi_challenge/leaders-board-new
References
[1] A. Cardona, S. Saalfeld, S. Preibisch, B. Schmid, A. Cheng, J. Pulokas, P. Tomancak, and V. Hartenstein.
An integrated micro-and macroarchitectural analysis of the drosophila brain by computer-assisted serial
section electron microscopy. PLoS Biol, 8(10):e1000502, 2010.
[2] H. Chen, X. Qi, L. Yu, and P.-A. Heng. Dcan: Deep contour-aware networks for accurate gland segmentation. arXiv preprint arXiv:1604.02677, 2016.
[3] H. Chen, X. J. Qi, J. Z. Cheng, and P. A. Heng. Deep contextual networks for neuronal structure
segmentation. In AAAI Conference on Artificial Intelligence, 2016.
[4] D. Ciresan, A. Giusti, L. M. Gambardella, and J. Schmidhuber. Deep neural networks segment neuronal
membranes in electron microscopy images. In NIPS, pages 2843?2851, 2012.
[5] R. Collobert, K. Kavukcuoglu, and C. Farabet. Torch7: A Matlab-like environment for machine learning.
In BigLearn, NIPS Workshop, 2011.
[6] Y. N. Dauphin, H. de Vries, J. Chung, and Y. Bengio. Rmsprop and equilibrated adaptive learning rates for
non-convex optimization. arXiv preprint arXiv:1502.04390, 2015.
[7] K. He, X. Zhang, S. Ren, and J. Sun. Delving deep into rectifiers: Surpassing human-level performance on
imagenet classification. In CVPR, pages 1026?1034, 2015.
[8] S. Hochreiter and J. Schmidhuber. Long short-term memory. Neural Computation, 9(8):1735?1780, 1997.
[9] D. Kingma and J. Ba. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014.
[10] M. Lai. Deep learning for medical image segmentation. arXiv preprint arXiv:1505.02000, 2015.
[11] K. Lee, A. Zlateski, V. Ashwin, and H. S. Seung. Recursive training of 2D-3D convolutional networks for
neuronal boundary prediction. In NIPS, pages 3559?3567, 2015.
[12] N. Léonard, S. Waghmare, and Y. Wang. rnn: Recurrent library for Torch. arXiv preprint arXiv:1511.07889,
2015.
[13] J. Long, E. Shelhamer, and T. Darrell. Fully convolutional networks for semantic segmentation. In CVPR,
pages 3431?3440, 2015.
[14] V. Patraucean, A. Handa, and R. Cipolla. Spatio-temporal video autoencoder with differentiable memory.
arXiv preprint arXiv:1511.06309, 2015.
[15] A. Prasoon, K. Petersen, C. Igel, F. Lauze, E. Dam, and M. Nielsen. Deep feature learning for knee
cartilage segmentation using a triplanar convolutional neural network. In MICCAI, pages 246?253, 2013.
[16] O. Ronneberger, P. Fischer, and T. Brox. U-Net: Convolutional networks for biomedical image segmentation. In MICCAI, pages 234?241, 2015.
[17] X. Shi, Z. Chen, H. Wang, D.-Y. Yeung, W.-K. Wong, and W. chun Woo. Convolutional lstm network: A
machine learning approach for precipitation nowcasting. arXiv preprint arXiv:1506.04214, 2015.
[18] M. F. Stollenga, W. Byeon, M. Liwicki, and J. Schmidhuber. Parallel multi-dimensional LSTM, with
application to fast biomedical volumetric image segmentation. In NIPS, pages 2980?2988, 2015.
[19] C. Sun, M. Paluri, R. Collobert, R. Nevatia, and L. Bourdev. Pronet: Learning to propose object-specific
boxes for cascaded neural networks. arXiv preprint arXiv:1511.03776, 2015.
Clustering with Same-Cluster Queries
Hassan Ashtiani, Shrinu Kushagra, and Shai Ben-David
David R. Cheriton School of Computer Science
University of Waterloo,
Waterloo, Ontario, Canada
{mhzokaei,skushagr,shai}@uwaterloo.ca
Abstract
We propose a framework for Semi-Supervised Active Clustering (SSAC),
where the learner is allowed to interact with a domain expert, asking
whether two given instances belong to the same cluster or not. We study the query
and computational complexity of clustering in this framework. We consider a
setting where the expert conforms to a center-based clustering with a notion of
margin. We show that there is a trade-off between computational complexity and
query complexity; we prove that for the case of k-means clustering (i.e., when the
expert conforms to a solution of k-means), having access to relatively few such
queries allows efficient solutions to otherwise NP-hard problems.
In particular, we provide a probabilistic polynomial-time (BPP) algorithm for
clustering in this setting that asks O(k^2 log k + k log n) same-cluster queries and
runs with time complexity O(kn log n) (where k is the number of clusters and
n is the number of instances). The algorithm succeeds with high probability for
data satisfying margin conditions under which, without queries, we show that the
problem is NP hard. We also prove a lower bound on the number of queries needed
to have a computationally efficient clustering algorithm in this setting.
1 Introduction
Clustering is a challenging task particularly due to two impediments. The first problem is that
clustering, in the absence of domain knowledge, is usually an under-specified task; the solution
of choice may vary significantly between different intended applications. The second one is that
performing clustering under many natural models is computationally hard.
Consider the task of dividing the users of an online shopping service into different groups. The result
of this clustering can then be used for example in suggesting similar products to the users in the same
group, or for organizing data so that it would be easier to read/analyze the monthly purchase reports.
Those different applications may result in conflicting solution requirements. In such cases, one needs
to exploit domain knowledge to better define the clustering problem.
Aside from trial and error, a principled way of extracting domain knowledge is to perform clustering
using a form of ?weak? supervision. For example, Balcan and Blum [BB08] propose to use an
interactive framework with ?split/merge? queries for clustering. In another work, Ashtiani and
Ben-David [ABD15] require the domain expert to provide the clustering of a ?small? subset of data.
At the same time, mitigating the computational problem of clustering is critical. Solving most of
the common optimization formulations of clustering is NP-hard (in particular, solving the popular
k-means and k-median clustering problems). One approach to address this issue is to exploit the
fact that natural data sets usually exhibit some nice properties and are likely to avoid the worst-case
scenarios. In such cases, an optimal solution to clustering may be found efficiently. The quest for notions
30th Conference on Neural Information Processing Systems (NIPS 2016), Barcelona, Spain.
of niceness that are likely to occur in real data and allow clustering efficiency is still ongoing (see
[Ben15] for a critical survey of work in that direction).
In this work, we take a new approach to alleviate the computational problem of clustering. In
particular, we ask the following question: can weak supervision (in the form of answers to natural
queries) help relax the computational burden of clustering? This will add to the other benefit
of supervision: making the clustering problem better defined by enabling the accession of domain
knowledge through the supervised feedback.
The general setting considered in this work is the following. Let X be a set of elements that should
be clustered and d a dissimilarity function over it. The oracle (e.g., a domain expert) has some
information about a target clustering C*_X in mind. The clustering algorithm has access to X, d, and
can also make queries about C*_X. The queries are in the form of same-cluster queries. Namely, the
algorithm can ask whether two elements belong to the same cluster or not. The goal of the algorithm
is to find a clustering that meets some predefined clusterability conditions and is consistent with the
answers given to its queries.
We will also consider the case that the oracle conforms with some optimal k-means solution. We
then show that access to a "reasonable" number of same-cluster queries can enable us to provide an
efficient algorithm for otherwise NP-hard problems.
1.1 Contributions
The two main contributions of this paper are the introduction of the semi-supervised active clustering
(SSAC) framework and the rather unusual demonstration that access to simple query answers can
turn an otherwise NP hard clustering problem into a feasible one.
Before we explain those results, let us also mention a notion of clusterability (or "input niceness")
that we introduce. We define a novel notion of niceness of data, called the γ-margin property, that is
related to the previously introduced notion of center proximity [ABS12]. The larger the value of
γ, the stronger the assumption becomes, which means that clustering becomes easier. With respect
to that γ parameter, we get a sharp "phase transition" between k-means being NP-hard and being
optimally solvable in polynomial time¹.
We focus on the effect of using queries on the computational complexity of clustering. We provide
a probabilistic polynomial time (BPP) algorithm for clustering with queries, that succeeds under
the assumption that the input satisfies the γ-margin condition for γ > 1. This algorithm makes
O(k^2 log k + k log n) same-cluster queries to the oracle and runs in O(kn log n) time, where k is
the number of clusters and n is the size of the instance set.
On the other hand, we show that without access to query answers, k-means clustering is NP-hard
even when the solution satisfies the γ-margin property for γ = √3.4 ≈ 1.84 and k = Θ(n^ε) (for any
ε ∈ (0, 1)). We further show that access to Ω(log k + log n) queries is needed to overcome the NP
hardness in that case. These results, put together, show an interesting phenomenon. Assume that
the oracle conforms to an optimal solution of k-means clustering and that it satisfies the γ-margin
property for some 1 < γ ≤ √3.4. In this case, our lower bound means that without making queries
k-means clustering is NP-hard, while the positive result shows that with a reasonable number of
queries the problem becomes efficiently solvable.
This indicates an interesting (and as far as we are aware, novel) trade-off between query complexity
and computational complexity in the clustering domain.
1.2 Related Work
This work combines two themes in clustering research; clustering with partial supervision (in
particular, supervision in the form of answers to queries) and the computational complexity of
clustering tasks.
Supervision in clustering (sometimes also referred to as "semi-supervised clustering") has been
addressed before, mostly in application-oriented works [BBM02, BBM04, KBDM09]. The most
¹ The exact value of such a threshold γ depends on some finer details of the clustering task: whether d is
required to be Euclidean and whether the cluster centers must be members of X.
common method to convey such supervision is through a set of pairwise link/do-not-link constraints
on the instances. Note that in contrast to the supervision we address here, in the setting of the papers
cited above, the supervision is non-interactive. On the theory side, Balcan et al. [BB08] propose a
framework for interactive clustering with the help of a user (i.e., an oracle). The queries considered in
that framework are different from ours. In particular, the oracle is provided with the current clustering,
and tells the algorithm to either split a cluster or merge two clusters. Note that in that setting, the
oracle should be able to evaluate the whole given clustering for each query.
Another example of the use of supervision in clustering was provided by Ashtiani and Ben-David
[ABD15]. They assumed that the target clustering can be approximated by first mapping the data
points into a new space and then performing k-means clustering. The supervision is in the form of a
clustering of a small subset of data (the subset provided by the learning algorithm) and is used to
search for such a mapping.
Our proposed setup combines the user-friendliness of link/don't-link queries (as opposed to asking
the domain expert to answer queries about whole data set clustering, or to cluster sets of data) with
the advantages of interactiveness.
The computational complexity of clustering has been extensively studied. Many of these results
are negative, showing that clustering is computationally hard. For example, k-means clustering is
NP-hard even for k = 2 [Das08], or in a 2-dimensional plane [Vat09, MNV09]. In order to tackle the
problem of computational complexity, some notions of niceness of data under which the clustering
becomes easy have been considered (see [Ben15] for a survey).
The closest proposal to this work is the notion of α-center proximity introduced by Awasthi et al.
[ABS12]. We discuss the relationship of that notion to our notion of margin in Appendix B. In the
restricted scenario (i.e., when the centers of clusters are selected from the data set), their algorithm
efficiently recovers the target clustering (outputs a tree such that the target is a pruning of the tree) for
α > √3. Balcan and Liang [BL12] improve the assumption to α > √2 + 1. Ben-David and Reyzin
[BDR14] show that this problem is NP-hard for α < 2.
Variants of these proofs for our γ-margin condition yield the feasibility of k-means clustering when
the input satisfies the condition with γ > 2 and NP-hardness when γ < 2, both in the case of arbitrary
(not necessarily Euclidean) metrics².
2 Problem Formulation
2.1 Center-based clustering
The framework of clustering with queries can be applied to any type of clustering. However, in this
work, we focus on a certain family of common clusterings: center-based clusterings in Euclidean
spaces³.
Let X be a subset of some Euclidean space, R^d. Let C_X = {C1, . . . , Ck} be a clustering (i.e., a
partitioning) of X. We say x1 ∼_{C_X} x2 if x1 and x2 belong to the same cluster according to C_X. We
further denote by n the number of instances (|X|) and by k the number of clusters.
We say that a clustering C_X is center-based if there exists a set of centers μ = {μ1, . . . , μk} ⊂ R^n
such that the clustering corresponds to the Voronoi diagram over those center points. Namely, for
every x in X and i ≤ k, x ∈ Ci if and only if i = arg min_j d(x, μj).
Finally, we assume that the centers μ* corresponding to C* are the centers of mass of the corresponding clusters. In other words, μ*_i = (1/|C_i*|) Σ_{x ∈ C_i*} x. Note that this is the case for example when the
oracle's clustering is the optimal solution to the Euclidean k-means clustering problem.
2.2 The γ-margin property
Next, we introduce a notion of clusterability of a data set, also referred to as a "data niceness property".
² In particular, the hardness result of [BDR14] relies on the ability to construct non-Euclidean distance
functions. Later in this paper, we prove hardness for γ ≤ √3.4 for Euclidean instances.
³ In fact, our results are all independent of the Euclidean dimension and apply to any Hilbert space.
Definition 1 (γ-margin). Let X be a set of points in metric space M. Let C_X = {C1, . . . , Ck} be
a center-based clustering of X induced by centers μ1, . . . , μk ∈ M. We say that C_X satisfies the
γ-margin property if the following holds: for all i ∈ [k] and every x ∈ Ci and y ∈ X \ Ci,
γ d(x, μi) < d(y, μi).
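As an illustration, the property can be checked directly from a labelled data set. The following is a minimal sketch (ours, not from the paper); it assumes Euclidean distances, NumPy, and centers taken as the clusters' centers of mass:

```python
# Sketch: check the gamma-margin property (Definition 1) of a labelled data set.
# Assumes Euclidean distance and centers of mass as the cluster centers.
import numpy as np

def satisfies_gamma_margin(X, labels, gamma):
    X, labels = np.asarray(X, dtype=float), np.asarray(labels)
    for c in np.unique(labels):
        mu = X[labels == c].mean(axis=0)                     # center of mass
        d_in = np.linalg.norm(X[labels == c] - mu, axis=1)   # member distances
        d_out = np.linalg.norm(X[labels != c] - mu, axis=1)  # non-member distances
        if d_out.size and gamma * d_in.max() >= d_out.min():
            return False   # some outside point is too close to this center
    return True
```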
Similar notions have been considered before in the clustering literature. The closest one to our
γ-margin is the notion of α-center proximity [BL12, ABS12]. We discuss the relationship between
these two notions in Appendix B.
2.3 The algorithmic setup
For a clustering C* = {C1*, . . . , Ck*}, a C*-oracle is a function O_{C*} that answers queries according
to that clustering. One can think of such an oracle as a user that has some idea about its desired
clustering, enough to answer the algorithm's queries. The clustering algorithm then tries to recover
C* by querying a C*-oracle. The following notion of query is arguably the most intuitive.
Definition 2 (Same-cluster query). A same-cluster query asks whether two instances x1 and x2
belong to the same cluster, i.e.,
O_{C*}(x1, x2) = true if x1 ∼_{C*} x2, and false otherwise
(we omit the subscript C* when it is clear from the context).
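For concreteness, such an oracle can be simulated from a ground-truth labelling. The sketch below is our own wrapper (the class name and query counter are assumptions, not from the paper); it also exposes the cluster-assignment query of Definition 4, which Algorithm 1 uses alongside same-cluster queries:

```python
# Sketch: a C*-oracle backed by a ground-truth labelling, with a query counter
# so the query complexity bounds can be checked empirically.
class SameClusterOracle:
    def __init__(self, labels):
        self.labels = labels          # labels[i] = target cluster of instance i
        self.num_queries = 0

    def same_cluster(self, i, j):     # Definition 2
        self.num_queries += 1
        return self.labels[i] == self.labels[j]

    def cluster_assignment(self, i):  # Definition 4; costs at most k
        self.num_queries += 1         # same-cluster queries in the analysis
        return self.labels[i]
```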
Definition 3 (Query complexity). An SSAC instance is determined by the tuple (X, d, C*). We will
consider families of such instances determined by niceness conditions on their oracle clusterings C*.
1. An SSAC algorithm A is called a q-solver for a family G of such instances if, for every
instance (X, d, C*) ∈ G, it can recover C* by having access to (X, d) and making at most
q queries to a C*-oracle.
2. Such an algorithm is a polynomial q-solver if its time complexity is polynomial in |X| and
|C*| (the number of clusters).
3. We say G admits an O(q) query complexity if there exists an algorithm A that is a polynomial
q-solver for every clustering instance in G.
3 An Efficient SSAC Algorithm
In this section we provide an efficient algorithm for clustering with queries. The setting is the one
described in the previous section. In particular, it is assumed that the oracle has a center-based
clustering in mind which satisfies the γ-margin property. The space is Euclidean and the center
of each cluster is the center of mass of the instances in that cluster. The algorithm not only makes
same-cluster queries, but also another type of query defined as below.
Definition 4 (Cluster-assignment query). A cluster-assignment query asks for the cluster index that an
instance x belongs to. In other words, O_{C*}(x) = i if and only if x ∈ C_i*.
Note however that each cluster-assignment query can be replaced with k same-cluster queries (see
Appendix A in the supplementary material). Therefore, we can express everything in terms of the more
natural notion of same-cluster queries, and the use of cluster-assignment query is just to make the
representation of the algorithm simpler.
Intuitively, our proposed algorithm does the following. In the first phase, it tries to approximate the
center of one of the clusters. It does this by asking cluster-assignment queries about a set of randomly
(uniformly) selected points, until it has a sufficient number of points from at least one cluster (say Cp).
It uses the mean of these points, μ'_p, to approximate the cluster center.
In the second phase, the algorithm recovers all of the instances belonging to Cp. In order to do that, it
first sorts all of the instances based on their distance to μ'_p. By showing that all of the points in Cp lie
inside a sphere centered at μ'_p (which does not include points from any other cluster), it tries to find
the radius of this sphere by doing binary search using same-cluster queries. After that, the elements
in Cp will be located and can be removed from the data set. The algorithm repeats this process k
times to recover all of the clusters.
The details of our approach are stated precisely in Algorithm 1. Note that η is a small constant⁴.
Theorem 7 shows that if γ > 1 then our algorithm recovers the target clustering with high probability.
Next, we give bounds on the time and query complexity of our algorithm. Theorem 8 shows that our
approach needs O(k log n + k^2 log k) queries and runs with time complexity O(kn log n).
Algorithm 1: Algorithm for γ(> 1)-margin instances with queries
Input: Clustering instance X, oracle O, the number of clusters k and parameter δ ∈ (0, 1)
Output: A clustering C of the set X
C = {}, S1 = X, β = η (log k + log(1/δ)) / (γ − 1)^4
for i = 1 to k do
Phase 1
l = kβ + 1;
Z ∼ U^l[Si] // Draw l independent elements from Si uniformly at random
For 1 ≤ t ≤ i,
Zt = {x ∈ Z : O(x) = t}. // Ask cluster-assignment queries about the members of Z
p = arg max_t |Zt|
μ'_p := (1/|Zp|) Σ_{x ∈ Zp} x.
Phase 2
// We know that there exists ri such that ∀x ∈ Si, x ∈ Cp if and only if d(x, μ'_p) < ri.
// Therefore, ri can be found by simple binary search
Ŝi = Sorted(Si) // Sort the elements of Si in increasing order of d(x, μ'_p).
ri = BinarySearch(Ŝi) // This step takes up to O(log |Si|) same-cluster queries
C'_p = {x ∈ Si : d(x, μ'_p) ≤ ri}.
S_{i+1} = Si \ C'_p.
C = C ∪ {C'_p}
end
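The following is a rough Python rendering of Algorithm 1 (ours; the paper gives only pseudocode). It assumes an oracle object like the one sketched after Definition 2, and implements the radius step as a find-the-last-member binary search over the sorted points:

```python
# Sketch of Algorithm 1: recover clusters one at a time using cluster-assignment
# queries (Phase 1) and a binary search with same-cluster queries (Phase 2).
import numpy as np

def ssac_cluster(X, oracle, k, beta, seed=0):
    X = np.asarray(X, dtype=float)
    S = list(range(len(X)))                    # indices still unclustered
    rng = np.random.default_rng(seed)
    clusters = []
    for _ in range(k):
        # Phase 1: sample l = k*beta + 1 points and estimate one cluster center.
        Z = rng.choice(S, size=min(len(S), int(k * beta) + 1), replace=True)
        assign = [oracle.cluster_assignment(i) for i in Z]
        vals, counts = np.unique(assign, return_counts=True)
        p = vals[counts.argmax()]              # majority cluster among samples
        members = [i for i, a in zip(Z, assign) if a == p]
        mu = X[members].mean(axis=0)
        # Phase 2: sort by distance to mu; the margin puts all of cluster p first.
        S.sort(key=lambda i: np.linalg.norm(X[i] - mu))
        ref = members[0]                       # a known member of cluster p
        lo, hi = 0, len(S) - 1                 # find the last index in cluster p
        while lo < hi:
            mid = (lo + hi + 1) // 2
            if oracle.same_cluster(S[mid], ref):
                lo = mid
            else:
                hi = mid - 1
        clusters.append(S[: lo + 1])
        S = S[lo + 1:]
    return clusters
```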
Lemma 5. Let (X, d, C) be a clustering instance, where C is center-based and satisfies the γ-margin
property. Let μ be the set of centers corresponding to the centers of mass of C. Let μ'_i be such that
d(μi, μ'_i) ≤ ε r(Ci), where r(Ci) = max_{x ∈ Ci} d(x, μi). Then γ ≥ 1 + 2ε implies that
∀x ∈ Ci, ∀y ∈ X \ Ci : d(x, μ'_i) < d(y, μ'_i).
Proof. Fix any x ∈ Ci and y ∈ Cj. Then d(x, μ'_i) ≤ d(x, μi) + d(μi, μ'_i) ≤ r(Ci)(1 + ε). Similarly,
d(y, μ'_i) ≥ d(y, μi) − d(μi, μ'_i) > (γ − ε) r(Ci). Combining the two, we get that
d(x, μ'_i) < ((1 + ε)/(γ − ε)) d(y, μ'_i); since γ ≥ 1 + 2ε implies (1 + ε)/(γ − ε) ≤ 1, the claim follows.
Lemma 6. Let the framework be as in Lemma 5. Let Zp, Cp, μp, μ'_p and β be defined as in Algorithm
1, and let ε = (γ − 1)/2. If |Zp| > β, then the probability that d(μp, μ'_p) > ε r(Cp) is at most δ/k.
Proof. Define a uniform distribution U over Cp. Then μp and μ'_p are the true and empirical mean of
this distribution. Using a standard concentration inequality (Thm. 12 from Appendix D) shows that
the empirical mean is close to the true mean, completing the proof.
Theorem 7. Let (X, d, C) be a clustering instance, where C is center-based and satisfies the γ-margin
property. Let μi be the center of mass of Ci. Assume δ ∈ (0, 1) and γ > 1. Then with
probability at least 1 − δ, Algorithm 1 outputs C.
⁴ It corresponds to the constant that appears in the generalized Hoeffding inequality bound, discussed in
Theorem 12 in Appendix D in the supplementary materials.
Proof. In the first phase of the algorithm we make l > kβ cluster-assignment queries. Therefore,
using the pigeonhole principle, we know that there exists a cluster index p such that |Zp| > β. Then
Lemma 6 implies that the algorithm chooses a center μ'_p such that with probability at least 1 − δ/k
we have d(μp, μ'_p) ≤ ε r(Cp). By Lemma 5, this would mean that d(x, μ'_p) < d(y, μ'_p) for all
x ∈ Cp and y ∉ Cp. Hence, the radius ri found in phase two of Alg. 1 is such that
ri = max_{x ∈ Cp} d(x, μ'_p). This implies that C'_p (found in phase two) equals Cp. Hence, with
probability at least 1 − δ/k one iteration of the algorithm successfully finds all the points in a cluster
Cp. Using the union bound, we get that with probability at least 1 − k(δ/k) = 1 − δ, the algorithm
recovers the target clustering.
Theorem 8. Let the framework be as in Theorem 7. Then Algorithm 1
- makes O(k log n + k^2 (log k + log(1/δ)) / (γ − 1)^4) same-cluster queries to the oracle O;
- runs in O(kn log n + k^2 (log k + log(1/δ)) / (γ − 1)^4) time.
Proof. In each iteration, (i) the first phase of the algorithm takes O(β) time and makes β + 1 cluster-assignment queries, and (ii) the second phase takes O(n log n) time and makes O(log n) same-cluster
queries. Each cluster-assignment query can be replaced with k same-cluster queries; therefore,
each iteration runs in O(kβ + n log n) and uses O(kβ + log n) same-cluster queries. Replacing
β = η (log k + log(1/δ)) / (γ − 1)^4 and noting that there are k iterations completes the proof.
Corollary 9. The set of Euclidean clustering instances that satisfy the γ-margin property for some
γ > 1 admits query complexity O(k log n + k^2 (log k + log(1/δ)) / (γ − 1)^4).
4 Hardness Results
4.1 Hardness of Euclidean k-means with Margin
Finding a k-means solution without the help of an oracle is generally computationally hard. In this
section, we will show that solving Euclidean k-means remains hard even if we know that the optimal
solution satisfies the γ-margin property for γ = √3.4. In particular, we show the hardness for the
case of k = Θ(n^ε) for any ε ∈ (0, 1).
In Section 3, we proposed a polynomial-time algorithm that could recover the target clustering using
O(k^2 log k + k log n) queries, assuming that the clustering satisfies the γ-margin property for γ > 1.
Now assume that the oracle conforms to the optimal k-means clustering solution. In this case, for
1 < γ ≤ √3.4 ≈ 1.84, solving k-means clustering would be NP-hard without queries, while it
becomes efficiently solvable with the help of an oracle⁵.
Given a set of instances X ⊂ R^d, the k-means clustering problem is to find a clustering C =
{C1, . . . , Ck} which minimizes f(C) = Σ_{Ci ∈ C} min_{μi ∈ R^d} Σ_{x ∈ Ci} ‖x − μi‖₂². The decision
version of k-means is: given some value L, is there a clustering C with cost ≤ L? The following
theorem is the main result of this section.
Theorem 10. Finding the optimal solution to the Euclidean k-means objective function is NP-hard
when k = Θ(n^ε) for any ε ∈ (0, 1), even when the optimal solution satisfies the γ-margin property
for γ = √3.4.
This result extends the hardness result of [BDR14] to the case of the Euclidean metric, rather than
an arbitrary one, and to the γ-margin condition (instead of the α-center proximity there). The full proof
is rather technical and is deferred to the supplementary material (Appendix C).
⁵ To be precise, note that the algorithm used for clustering with queries is probabilistic, while the lower bound
that we provide is for deterministic algorithms. However, this implies a lower bound for randomized algorithms
as well, unless BPP ≠ P.
4.1.1 Overview of the proof
Our method to prove Thm. 10 is based on the approach employed by [Vat09]. However, the original
construction proposed in [Vat09] does not satisfy the γ-margin property. Therefore, we have to
modify the proof by setting up the parameters of the construction more carefully.
To prove the theorem, we will provide a reduction from the problem of Exact Cover by 3-Sets (X3C)
which is NP-Complete [GJ02], to the decision version of k-means.
Definition 11 (X3C). Given a set U containing exactly 3m elements and a collection S =
{S1 , . . . , Sl } of subsets of U such that each Si contains exactly three elements, does there exist
m elements in S such that their union is U ?
We will show how to translate each instance of X3C, (U, S), to an instance of k-means clustering in
the Euclidean plane, X. In particular, X has a grid-like structure consisting of l rows (one for each
Si ) and roughly 6m columns (corresponding to U ) which are embedded in the Euclidean plane. The
special geometry of the embedding makes sure that any low-cost k-means clustering of the points
(where k is roughly 6ml) exhibits a certain structure. In particular, any low-cost k-means clustering
could cluster each row in only two ways; one of these corresponds to Si being included in the cover,
while the other means it should be excluded. We will then show that U has a cover of size m if and
only if X has a clustering of cost less than a specific value L. Furthermore, our choice of embedding
makes sure that the optimal clustering satisfies the γ-margin property for γ = √3.4 ≈ 1.84.
4.1.2 Reduction design
Given an instance of X3C, that is the elements U = {1, . . . , 3m} and the collection S, we construct
a set of points X in the Euclidean plane which we want to cluster. Particularly, X consists of
a set of points H_{l,m} in a grid-like manner, and the sets Zi corresponding to Si. In other words,
X = H_{l,m} ∪ (∪_{i=1}^{l−1} Zi).
The set Hl,m is as described in Fig. 1. The row Ri is composed of 6m + 3 points
{si , ri,1 , . . . , ri,6m+1 , fi }. Row Gi is composed of 3m points {gi,1 , . . . , gi,3m }. The distances
between the points are also shown in Fig. 1. Also, all these points have weight w, simply meaning
that each point is actually a set of w points on the same location.
Each set Zi is constructed based on Si. In particular, Zi = ∪_{j ∈ [3m]} Bi,j, where Bi,j is a subset of
{xi,j, x'_{i,j}, yi,j, y'_{i,j}} and is constructed as follows: xi,j ∈ Bi,j iff j ∉ Si, and x'_{i,j} ∈ Bi,j iff j ∈ Si.
Similarly, yi,j ∈ Bi,j iff j ∉ Si+1, and y'_{i,j} ∈ Bi,j iff j ∈ Si+1. Furthermore, xi,j, x'_{i,j}, yi,j and y'_{i,j}
are specific locations as depicted in Fig. 2. In other words, exactly one of the locations xi,j and x'_{i,j},
and one of yi,j and y'_{i,j} will be occupied. We set the following parameters.
h = √5, d = √6, ε = 1/w², λ = (2/√3)h, k = (l − 1)3m + l(3m + 2),
L1 = (6m + 3)wl, L2 = 3m(l − 1)w, L = L1 + L2 − mα, α = d/w − 1/(2w³).
Lemma 12. The set X = H_{l,m} ∪ Z has a k-clustering of cost less than or equal to L if and only if
there is an exact cover for the X3C instance.
Lemma 13. Any k-clustering of X = H_{l,m} ∪ Z with cost ≤ L has the γ-margin property where
γ = √3.4. Furthermore, k = Θ(n^ε).
The proofs are provided in Appendix C. Lemmas 12 and 13 together show that X has a k-clustering
of cost ≤ L satisfying the γ-margin property (for γ = √3.4) if and only if there is an exact cover by
3-sets for the X3C instance. This completes the proof of our main result (Thm. 10).
4.2 Lower Bound on the Number of Queries
In the previous section we showed that k-means clustering is NP-hard even under the γ-margin
assumption (for γ ≤ √3.4 ≈ 1.84). On the other hand, in Section 3 we showed that this is not the case if the
algorithm has access to an oracle. In this section, we show a lower bound on the number of queries
needed to provide a polynomial-time algorithm for k-means clustering under the margin assumption.
Figure 1: Geometry of H_{l,m}. This figure is similar to Fig. 1 in [Vat09]. Reading from left to
right, each row Ri consists of a diamond (si), 6m + 1 bullets (ri,1, . . . , ri,6m+1), and another
diamond (fi). Each row Gi consists of 3m circles (gi,1, . . . , gi,3m).
Figure 2: The locations of xi,j, x'_{i,j}, yi,j and y'_{i,j} in the set Zi. Note that the point gi,j is not
vertically aligned with xi,j or ri,2j. This figure is adapted from [Vat09].
Theorem 14. For any γ ≤ √3.4, finding the optimal solution to the k-means objective function is
NP-hard even when the optimal clustering satisfies the γ-margin property and the algorithm can ask
O(log k + log |X|) same-cluster queries.
Proof. Proof by contradiction: assume that there is a polynomial-time algorithm A that makes
O(log k + log |X|) same-cluster queries to the oracle. Then, we show there exists another algorithm
A' for the same problem that is still polynomial but uses no queries. However, this will be a
contradiction to Theorem 10, which will prove the result.
In order to prove that such an A' exists, we use a "simulation" technique. Note that A makes only
q < η(log k + log |X|) binary queries, where η is a constant. The oracle therefore can respond to
these queries in at most 2^q < k^η |X|^η different ways. Now the algorithm A' can try to simulate all
of the k^η |X|^η possible responses by the oracle and output the solution with minimum k-means
clustering cost. Therefore, A' runs in polynomial time and is equivalent to A.
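As a sketch of the simulation trick (illustrative only; the function names are ours), a query-free algorithm can branch over all 2^q possible oracle answer strings and keep the lowest-cost output:

```python
# Sketch: simulate an oracle-using algorithm by enumerating all 2^q possible
# answer strings; this stays polynomial when 2^q is polynomial in k and |X|.
from itertools import product

def simulate_without_oracle(run_with_answers, q, cost):
    best = None
    for answers in product([False, True], repeat=q):  # 2^q branches
        candidate = run_with_answers(answers)         # run A with fixed answers
        if best is None or cost(candidate) < cost(best):
            best = candidate
    return best
```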
5 Conclusions and Future Directions
In this work we introduced a framework for semi-supervised active clustering (SSAC) with same-cluster queries. Those queries can be viewed as a natural way for a clustering mechanism to gain
domain knowledge, without which clustering is an under-defined task. The focus of our analysis was
the computational and query complexity of such SSAC problems when the input data set satisfies a
clusterability condition, the γ-margin property.
Our main result shows that access to a limited number of such query answers (logarithmic in the
size of the data set and quadratic in the number of clusters) allows efficient successful clustering
under conditions (margin parameter between 1 and √3.4 ≈ 1.84) that render the problem NP-hard
without the help of such a query mechanism. We also provided a lower bound indicating that at least
Ω(log kn) queries are needed to make those NP-hard problems feasibly solvable.
With practical applications of clustering in mind, a natural extension of our model is to allow the
oracle (i.e., the domain expert) to refrain from answering a certain fraction of the queries, or to make
a certain number of errors in its answers. It would be interesting to analyze how the performance
guarantees of SSAC algorithms behave as a function of such abstentions and error rates. Interestingly,
we can modify our algorithm to handle a sub-logarithmic number of abstentions by checking all
possible oracle answers to them (i.e., similar to the "simulation" trick in the proof of Thm. 14).
Acknowledgments
We would like to thank Samira Samadi and Vinayak Pathak for helpful discussions on the topics of
this paper.
References
[ABD15] Hassan Ashtiani and Shai Ben-David. Representation learning for clustering: A statistical framework. In Uncertainty in AI (UAI), 2015.
[ABS12] Pranjal Awasthi, Avrim Blum, and Or Sheffet. Center-based clustering under perturbation stability. Information Processing Letters, 112(1):49-54, 2012.
[BB08] Maria-Florina Balcan and Avrim Blum. Clustering with interactive feedback. In Algorithmic Learning Theory, pages 316-328. Springer, 2008.
[BBM02] Sugato Basu, Arindam Banerjee, and Raymond Mooney. Semi-supervised clustering by seeding. In Proceedings of the 19th International Conference on Machine Learning (ICML-2002), 2002.
[BBM04] Sugato Basu, Mikhail Bilenko, and Raymond J. Mooney. A probabilistic framework for semi-supervised clustering. In Proceedings of the tenth ACM SIGKDD international conference on Knowledge discovery and data mining, pages 59-68. ACM, 2004.
[BDR14] Shalev Ben-David and Lev Reyzin. Data stability in clustering: A closer look. Theoretical Computer Science, 558:51-61, 2014.
[Ben15] Shai Ben-David. Computational feasibility of clustering under clusterability assumptions. CoRR, abs/1501.00437, 2015.
[BL12] Maria-Florina Balcan and Yingyu Liang. Clustering under perturbation resilience. In Automata, Languages, and Programming, pages 63-74. Springer, 2012.
[Das08] Sanjoy Dasgupta. The hardness of k-means clustering. Department of Computer Science and Engineering, University of California, San Diego, 2008.
[GJ02] Michael R. Garey and David S. Johnson. Computers and Intractability, volume 29. W. H. Freeman, New York, 2002.
[KBDM09] Brian Kulis, Sugato Basu, Inderjit Dhillon, and Raymond Mooney. Semi-supervised graph clustering: a kernel approach. Machine Learning, 74(1):1-22, 2009.
[MNV09] Meena Mahajan, Prajakta Nimbhorkar, and Kasturi Varadarajan. The planar k-means problem is NP-hard. In WALCOM: Algorithms and Computation, pages 274-285. Springer, 2009.
[Vat09] Andrea Vattani. The hardness of k-means clustering in the plane. Manuscript, accessible at http://cseweb.ucsd.edu/avattani/papers/kmeans_hardness.pdf, 2009.
6,024 | 645 | Using hippocampal 'place cells' for
navigation, exploiting phase coding
Neil Burgess, John O'Keefe and Michael Recce
Department of Anatomy, University College London,
London WC1E 6BT, England.
(e-mail: n.burgess@ucl.ac.uk)
Abstract
A model of the hippocampus as a central element in rat navigation is presented. Simulations show both the behaviour of single
cells and the resultant navigation of the rat. These are compared
with single unit recordings and behavioural data. The firing of
CA1 place cells is simulated as the (artificial) rat moves in an environment. This is the input for a neuronal network whose output,
at each theta (θ) cycle, is the next direction of travel for the rat.
Cells are characterised by the number of spikes fired and the time
of firing with respect to the hippocampal θ rhythm. 'Learning' occurs
in 'on-off' synapses that are switched on by simultaneous pre- and
post-synaptic activity. The simulated rat navigates successfully to
goals encountered one or more times during exploration in open
fields. One minute of random exploration of a 1 m² environment
allows navigation to a newly-presented goal from novel starting positions. A limited number of obstacles can be successfully avoided.
1 Background
Experiments have shown the hippocampus to be crucial to the spatial memory and
navigational ability of the rat (O'Keefe & Nadel, 1978). Single unit recordings in
freely moving rats have revealed 'place cells' in fields CA3 and CA1 of the hippocampus whose firing is restricted to small portions of the rat's environment (the
corresponding 'place fields') (O'Keefe & Dostrovsky, 1971), see Fig. 1a. In addition cells have been found in the dorsal pre-subiculum whose primary behavioural
Figure 1: a) A typical CA1 place field; max. rate (over 1 s) is 13.6 spikes/s. b) One
second of the EEG θ rhythm is shown in C, as the rat runs through a place field.
A shows the times of firing of the place cell. Vertical ticks immediately above and
below the EEG mark the positive-to-negative zero-crossings of the EEG, which we
define as 0° (or 360°) of phase. B shows the phase of θ at which each spike was
fired (O'Keefe & Recce, 1992).
correlate is 'head-direction' (Taube et al., 1990). Both are suggestive of navigation.
Temporal as well as spatial aspects of the electrophysiology of the hippocampal
region are significant for a model. The hippocampal EEG 'θ rhythm' is best characterised as a sinusoid of frequency 7-12 Hz and occurs whenever the rat is making
displacement movements. Recently place cell firing has been found to have a systematic phase relationship to the local EEG (O'Keefe & Recce, 1992), see §3.1 and
Fig. 1b. Finally, the θ rhythm has been found to modulate long-term potentiation
of synapses in the hippocampus (Pavlides et al., 1988).
2 Introduction
We are designing a model that is consistent with both the data from single unit
recording and the behavioural data that are relevant to spatial memory and navigation in the rat. As a first step this paper examines a simple navigational strategy
that could be implemented in a physiologically plausible way to enable navigation
to previously encountered reward sites from novel starting positions. We assume
the firing properties of CAl place cells, which form the input for our system.
The simplest map-based strategies (as opposed to route-following ones) are based
on defining a surface over the whole environment, on which gradient ascent leads to
the goal (e.g. delayed reinforcement or temporal difference learning). These tend
to have the problem that, to build up this surface, the goal must be reached many
times, from different points in the environment (by which time the rat has died of
old age). Further, a new surface must be computed if the goal is moved. Specific
problems are raised by the properties of rats' navigation: (i) the position of CA1
place fields is independent of goal position (Speakman & O'Keefe, 1990); (ii) high
firing rates in place cells are restricted to limited portions of the environment; (iii)
rats are able to navigate after a brief exploration of the environment, and (iv) can
take novel short-cuts or detours (Tolman, 1948).
To overcome these problems we propose that a more diffuse representation of position is rapidly built up downstream of CA1, by cells with larger firing fields than in
CA1. The patterns of activation of this group of cells, at two different locations in
the environment, have a correlation that decreases with the separation of the two
locations (but never reaches zero, as is the case with small place fields). Thus the
overlap between the pattern of activity at any moment and the pattern of activity
at the goal location would be a measure of nearness to the goal. We refer to these
cells as 'subicular' cells because the subiculum seems a likely site for them, given
single unit recordings (Barnes et al., 1990) showing spatially consistent firing over
large parts of the environment.
We show that the output of these subicular cells is sufficient to enable navigation
in our model. In addition the model requires: (i) 'goal' cells (see Fig. 4a) that
fire when a goal is encountered, allowing synaptic connections from subicular cells
to be switched on, (ii) phase-coded place cell firing, (iii) 'head-direction' cells, and
(iv) synaptic change that is modulated by the phase of the EEG. The relative
firing rates of groups of goal cells code for the direction of objects encountered
during exploration, in the same way that cells in primate motor cortex code for the
direction of arm movements (Georgopoulos et al., 1988).
3 The model
In our simulation a rat is in constant motion (speed 30 cm/s) in a square environment
of size L × L (L ≈ 150 cm). Food or obstacles can be placed in the environment
at any time. The rat is aware of any objects within 6 cm (whisker length) of its
position. It bounces off any obstacles (or the edge of the environment) with which
it collides. The θ frequency is taken to be 10 Hz (period 0.1 s) and we model each
θ cycle as having 5 different phases. Thus the smallest timestep (at which synaptic
connections and cell firing rates are updated) is 0.02 s. The rat is either 'exploring'
(its current direction is a random variable within 30° of its previous direction), or
'searching' (its current direction is determined by the goal cells, see below). Synaptic
and cell update rules are the same during searching or exploring.
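A minimal sketch of this update loop, under our reading of the text (the function name and the wall-reflection details are assumptions), might look as follows:

```python
# Sketch: one 0.02 s step of the simulated rat (constants from Section 3).
import numpy as np

SPEED, DT, L_BOX = 30.0, 0.02, 150.0   # cm/s, s, cm

def step(pos, heading, exploring, goal_heading=None, rng=np.random):
    if exploring:                        # random direction within +/-30 degrees
        heading += np.deg2rad(rng.uniform(-30, 30))
    elif goal_heading is not None:       # searching: follow the goal cells
        heading = goal_heading
    new_pos = pos + SPEED * DT * np.array([np.cos(heading), np.sin(heading)])
    for d in range(2):                   # bounce off the walls of the box
        if not 0.0 <= new_pos[d] <= L_BOX:
            new_pos[d] = np.clip(new_pos[d], 0.0, L_BOX)
            heading = np.pi - heading if d == 0 else -heading
    return new_pos, heading
```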
3.1 The phase of CA1 place cell firing
When a rat on a linear track runs through a place field, the place cell fires at
successively earlier phases of the EEG θ rhythm. A cell that fires at phase 360°
when the rat enters the place field may fire as much as 355° earlier in the θ cycle
when exiting the field (O'Keefe & Recce, 1992), see Fig. 1b.
Simulations below involve 484 CA1 place cells with place field centres spread evenly
on a grid over the whole environment. The place fields are circular, with diameters
0.25L, 0.35L or 0.4L (as place fields appear to scale with the size of an environment;
Muller & Kubie, 1987). The fraction of cells active during any 0.1 s interval is thus
π(0.125² + 0.175² + 0.2²)/3 ≈ 9%. When the rat is in a cell's place field it fires 1 to
3 spikes depending on its distance from the field centre, see Fig. 2b.
When the (simulated) rat first enters a place field the cell fires 1 spike at phase
360° of the θ rhythm; as the rat moves through the place field, its phase of firing
shifts backwards by 72° every time the number of spikes fired by the cell changes
Figure 2: a) Firing rate map of a typical place cell in the model (max. rate 11.6
spikes/s); b) model of a place field; the numbers indicate the number of spikes fired
by the place cell when the rat is in each ring. c) The phase at which spikes would
be fired during all possible straight trajectories of the rat through the place field
from left to right. d) The total number of spikes fired in the model of CA1 versus
time; the phase of firing of one place cell (as the rat runs through the centre of the
field) is indicated by vertical ticks above the graph.
(i.e. each time it crosses a line in Fig. 2b). Thus each theta cycle is divided into
5 timesteps. No shift results from passing through the edge of the field, whereas a
shift of 288° (0.08 s) results from passing through the middle of the field, see Fig.
2c. The consequences for the model in terms of which place cells fire at different
phases within one θ cycle are shown in Fig. 3. The cells that are active at phase
360° have place fields centred ahead of the position of the rat (i.e. place fields that
the rat is entering), those active at phase 0° have place fields centred behind the
rat. If the rat is simultaneously leaving field A and entering field B then cell A fires
before cell B, having shifted backwards by up to 0.08 s. The total number of spikes
fired at each phase as the rat moves about implies that the envelope of all the spikes
fired in CA1 oscillates with the θ frequency. Fig. 2d shows the shift in the firing of
one cell compared to the envelope (cf. Fig. 1b).
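A small sketch of this phase code, assuming the rings of Fig. 2b divide the field radius into equal thirds (our assumption; the exact ring radii are not given in the text):

```python
# Sketch: spike count and firing phase of a place cell under the phase code.
def spikes_in_field(dist, radius):
    # 3 spikes near the centre, 2 in the middle ring, 1 in the outer ring
    if dist > radius:
        return 0
    return 3 - min(2, int(3 * dist / radius))

def firing_phase(ring_changes_so_far):
    # entry at 360 degrees; 72 degrees earlier per change in spike count,
    # giving a maximum shift of 288 degrees through the field centre
    return 360 - 72 * ring_changes_so_far
```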
3.2 Subicular cells
We simulate 6 groups of 80 cells (480 in total); each subicular cell receives one
synaptic connection from a random 5% of the CA1 cells. These connections are
either on or off (1 or 0). At each timestep (0.02 s) the 10 cells in each group with
the greatest excitatory input from CA1 fire between 1 and 5 spikes (depending on
their relative excitation). Fig. 3c shows a typical subicular firing rate map. The
consequences of phase coding in CA1 (Figs. 3a and b) remain in these subicular
cells as they are driven by CA1: the net firing field of all cells active at phase 360°
of θ is peaked ahead of the rat.
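A sketch of this competitive stage (ours; the scaling of the winners' output to 1-5 spikes is an assumption about how 'relative excitation' is mapped to spike counts):

```python
# Sketch: in each group of 80 'subicular' cells, the 10 most driven cells fire.
import numpy as np

def subicular_response(ca1_spikes, weights, n_groups=6, winners=10):
    # weights: (480, 484) 0/1 integer matrix; ca1_spikes: (484,) spike counts
    drive = weights @ ca1_spikes
    rates = np.zeros(len(drive))
    for g in range(n_groups):
        idx = np.arange(g * 80, (g + 1) * 80)
        top = idx[np.argsort(drive[idx])[-winners:]]         # winner-take-all
        d = drive[top].astype(float)
        span = d.max() - d.min() + 1e-9
        rates[top] = 1 + np.round(4 * (d - d.min()) / span)  # 1..5 spikes
    return rates
```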
Figure 3: Net firing rate map of all the place cells that were active at the 360° (a)
and 72° (b) phases of θ as the rat ran through the centre of the environment from
left to right. c) Firing rate map of a typical 'subicular' cell in the model; max. rate
(over 1.0 s) is 46.4 spikes/s. Barnes et al. (1990) found max. firing rates (over 0.1 s)
of 80 spikes/s (mean 7 spikes/s) in the subiculum.
Figure 4: a) Connections and units in the model: goal cells; subicular cells, 6×80 (480),
with on/off synapses at 5% connectivity; place cells, 22×22 (484). Interneurons shown
between the subicular cells indicate competitive dynamics, but are not simulated
explicitly. b) The trajectory of 90 seconds of 'exploration' in the central 126 × 126 cm²
of the environment. The rat is shown in the bottom left-hand corner, to scale.
3.2.1 Learning
The connections are initialised such that each subicular cell receives on average
one 'on' connection. Subsequently a synaptic connection can be switched on only
during phases 180° to 360° of θ. A synapse becomes switched on if the pre-synaptic
cell is active, and the post-synaptic cell is above a threshold activity (4 spikes), in
the same timestep (0.02 s). Hence a subicular firing field is rapidly built up during
exploration, as a superposition of CA1 place fields, see Fig. 3c.
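The rule amounts to a one-way Hebbian switch; a sketch (ours), with the plasticity window and the 4-spike threshold taken from the text:

```python
# Sketch: 'on-off' synapse update. W is a boolean (post, pre) matrix; a synapse
# turns on when pre fires and post is above threshold in the same 0.02 s step,
# and only during phases 180-360 of theta. Synapses never turn off.
import numpy as np

def update_synapses(W, pre_spikes, post_spikes, phase_deg, threshold=4):
    if 180 <= phase_deg <= 360:                 # plasticity window
        pre_active = np.asarray(pre_spikes) > 0
        post_active = np.asarray(post_spikes) >= threshold
        W |= np.outer(post_active, pre_active)  # switch on, never off
    return W
```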
3.3 Goal cells
The correlation between the patterns of activity of the subicular cells at two different locations in the environment decreases with the separation of the two locations.
Thus if synaptic connections to a goal cell were switched on when the rat encountered food then a firing rate map of the goal cell would resemble a cone covering
the entire environment, peaked at the food site, i.e. the firing rate would indicate
Figure 5: Goal cell firing fields, a) West, b) East, of 'food' encountered at the centre
of the environment. c) Trajectories to a goal from 8 novel starting positions. All
figures refer to encountering food immediately after the exploration in Fig. 4b.
Notice that much of the environment was never visited during exploration.
the closeness of the food during subsequent movement of the rat. The scheme we
actually use involves groups of goal cells continuously estimating the distance to 4
points displaced from the goal site in 4 different directions.
Notice that when a freely moving rat encounters an interesting object a fair amount
of 'local investigation' takes place (sniffing, rearing, looking around and local exploration). During the local investigation of a small object the rat crosses the location
of the object in many different directions. We postulate groups of goal cells that
become excited strongly enough to induce synaptic change in connections from
subicular cells whenever the rat encounters a specific piece of food and is heading in
a particular direction. This supposes the joint action of an object classifier and of
head-direction cells; head-direction cells corresponding to different directions being
connected to different goal cells. Since synaptic change occurs only at the 180° to
360° phases of θ, and the net firing rate map of all the subicular cells that are active
at phase 360° during any θ cycle is peaked ahead of the rat, goal cells have firing
fields that are peaked a little bit away from the goal position. For example, goal
cells whose subicular connections are changed when the rat is heading east have
firing rate fields that are peaked to the east of the goal location, see Fig. 5.
Local investigation of a food site is modelled by the rat moving 12 cm to the north,
south, east and west and occurs whenever food is encountered. Navigation is restricted to the central 126 × 126 cm² portion of the 150 × 150 cm² environment (over
which firing rate maps are shown) to leave room for this. There are 4 goal cells
for every piece of food found in the environment (GC_north, GC_south, GC_east,
GC_west), see Fig. 4a. Initially the connections from all subicular cells are off; they
are switched on if the subicular cell is active and the rat is at the particular piece of
food, travelling in the right direction. When the rat is searching, goal cells simply
fire a number of spikes (in each 0.02 s timestep) that is proportional to their net
excitatory input from the subicular cells.
3.4 Maps and navigation
When the rat is to the north of the food, GC_north fires at a higher rate than
GC_south. We take the firing rate of GC_north to be a 'vote' that the rat is north
Figure 6: a) Trajectory of the rat with alternating goals. b) An obstacle is interposed;
the rat collides with the obstacle on the first run, but learns to avoid the collision site
in the 2 subsequent runs. c) Successive predictions of goal (box) and obstacle (cross)
positions generated as the rat ran from one goal site to the other; the predicted
positions get more accurate as the rat gets closer to the object in question.
of the goal. Similarly the firing rate of GC_south is a vote that the rat is south
of the goal: the resultant direction (the vector sum of directions north, south, east
and west, weighted by the firing rates of the corresponding cells) is an estimate
of the direction of the rat from the food (cf. Georgopoulos et al., 1988). Since the
firing rate maps of the 4 goal cells are peaked quite close to the food location, their
net firing rate increases as the food is approached, i.e. it is an estimation of how
close the food is. Thus the firing rates of the 4 goal cells associated with a piece of
food can be used to predict its approximate position relative to the rat (e.g. 70cm
northeast), as the rat moves about the environment (see Fig. 6c).
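A sketch of this decoding step (ours; the paper describes it only in words):

```python
# Sketch: decode the rat-from-food direction as a rate-weighted vector sum of
# the four goal cells' preferred directions; the summed rate codes proximity.
import numpy as np

DIRS = {'north': (0, 1), 'south': (0, -1), 'east': (1, 0), 'west': (-1, 0)}

def rat_from_food(rates):
    # rates: dict of goal-cell firing rates, e.g. {'north': 3.0, 'east': 1.0, ...}
    v = sum(r * np.array(DIRS[d], dtype=float) for d, r in rates.items())
    norm = np.linalg.norm(v)
    proximity = sum(rates.values())        # grows as the food is approached
    return (v / norm if norm > 0 else v), proximity
```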
We use groups of goal cells to code for the locations at which the rat encountered
any objects (obstacles or food), as described above. A new group of goal cells is
recruited every time the rat encounters a new object, or a new (6cm) part of an
extended object. The output of the system acts as a map for the rat, telling it
where everything is relative to itself, as it moves around. The process of navigation
is to decide which way to go, given the information in the map. When there are
no obstacles in the environment, navigation corresponds to moving in the direction
indicated by the group of goal cells corresponding to a particular piece of food.
When the environment includes many obstacles the task of navigation is much
harder, and there is not enough clear behavioural data to guide modelling.
We do not model navigation at a neuronal level, although we wish to examine the
navigation that would result from a simple reading of the 'map' provided by our
model. The rules used to direct the simulated rat are as follows: (i) every 0.1 s the
direction and distance to the goal (one of the pieces of food) are estimated; (ii)
the direction and distance to all locations at which an obstacle was encountered
are estimated; (iii) obstacle locations are classified as 'in-the-way' if (a) estimated
to be within 45° of the goal direction, (b) closer than the goal and (c) less than
L/2 away; (iv) the current direction of the rat becomes the vector sum of the goal
direction (weighted by the net firing rate of the corresponding 4 goal cells) minus
the directions to any in-the-way obstacles (weighted by the net firing rate of the
'obstacle cells' and by the similarity of the obstacle and goal directions).
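A sketch of rules (i)-(iv) (ours; weighting the repulsion by the dot product is our reading of 'similarity of the obstacle and goal directions'):

```python
# Sketch: choose the rat's current direction from the decoded goal and obstacles.
import numpy as np

def choose_direction(goal_dir, goal_dist, goal_rate, obstacles, L=150.0):
    v = goal_rate * goal_dir                       # (iv) weighted goal direction
    for ob_dir, ob_dist, ob_rate in obstacles:     # each a decoded obstacle
        in_the_way = (np.dot(ob_dir, goal_dir) > np.cos(np.deg2rad(45))  # (a)
                      and ob_dist < goal_dist                            # (b)
                      and ob_dist < L / 2)                               # (c)
        if in_the_way:
            v = v - ob_rate * np.dot(ob_dir, goal_dir) * ob_dir  # repulsion
    n = np.linalg.norm(v)
    return v / n if n > 0 else v
```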
4 Performance
The model achieves latent learning (i.e. the map is constructed independently of
knowledge of the goal, see e.g. Tolman, 1948). A piece of food encountered only
once, after exploration, can be returned to, see Fig. 5c. Notice that a large part
of the environment was never visited during exploration (Fig. 4b). Navigation is
equally good after exploration in an environment containing food/obstacles from the
beginning. If the food is encountered only during the earliest stages of exploration
(before a stable subicular representation is built up) then performance is worse.
Multiple goals and a small number of obstacles can be accommodated, see Fig. 6.
Notice that searching also acts as exploration, and that synaptic connections can
be switched at any time: all learning is incremental, but saturates when all the
relevant synapses have been switched on. Performance does not depend crucially
on the parameter values used, although it is worse with fewer cells, and smaller
environments require less exploration before reliable navigation is possible (e.g. 60 s
for a 1 m² box). Quantitative analysis will appear in a longer paper.
References
Barnes C A, McNaughton B L, Mizumori S J Y, Leonard B W & Lin L-H (1990) 'Comparison of spatial and temporal characteristics of neuronal activity in sequential stages of hippocampal processing', Progress in Brain Research 83 287-300.
Georgopoulos A P, Kettner R E & Schwartz A B (1988) 'Primate motor cortex and free arm movements to visual targets in three-dimensional space. II. Coding of the direction of movement by a neuronal population', J. Neurosci. 8 2928-2937.
Muller R U & Kubie J L (1987) 'The effects of changes in the environment on the spatial firing of hippocampal complex-spike cells', J. Neurosci. 7 1951-1968.
O'Keefe J & Dostrovsky J (1971) 'The hippocampus as a spatial map: preliminary evidence from unit activity in the freely moving rat', Brain Res. 34 171-175.
O'Keefe J & Nadel L (1978) The hippocampus as a cognitive map, Clarendon Press, Oxford.
O'Keefe J & Recce M (1992) 'Phase relationship between hippocampal place units and the EEG theta rhythm', Hippocampus, to be published.
Pavlides C, Greenstein Y J, Grudman M & Winson J (1988) 'Long-term potentiation in the dentate gyrus is induced preferentially on the positive phase of θ-rhythm', Brain Res. 439 383-387.
Speakman A S & O'Keefe J (1990) 'Hippocampal complex spike cells do not change their place fields if the goal is moved within a cue controlled environment', European Journal of Neuroscience 2 544-555.
Taube J S, Muller R U & Ranck J B Jr (1990) 'Head-direction cells recorded from the postsubiculum in freely moving rats. I. Description & quantitative analysis', J. Neurosci. 10 420-435.
Tolman E C (1948) 'Cognitive maps in rats and men', Psychological Review 55 189-208.
6,025 | 6,450 | Hardness of Online Sleeping Combinatorial
Optimization Problems
Satyen Kale∗†
Yahoo Research
satyen@satyenkale.com
Chansoo Lee∗
Univ. of Michigan, Ann Arbor
chansool@umich.edu
Dávid Pál
Yahoo Research
dpal@yahoo-inc.com
Abstract
We show that several online combinatorial optimization problems that admit efficient no-regret algorithms become computationally hard in the sleeping setting
where a subset of actions becomes unavailable in each round. Specifically, we
show that the sleeping versions of these problems are at least as hard as PAC learning DNF expressions, a long-standing open problem. We show hardness for the
sleeping versions of ONLINE SHORTEST PATHS, ONLINE MINIMUM SPANNING
TREE, ONLINE k-SUBSETS, ONLINE k-TRUNCATED PERMUTATIONS, ONLINE
MINIMUM CUT, and ONLINE BIPARTITE MATCHING. The hardness result for
the sleeping version of the Online Shortest Paths problem resolves an open problem presented at COLT 2015 [Koolen et al., 2015].
1 Introduction
Online learning is a sequential decision-making problem where learner repeatedly chooses an action
in response to adversarially chosen losses for the available actions. The goal of the learner is to
minimize the regret, defined as the difference between the total loss of the algorithm and the loss of
the best fixed action in hindsight. In online combinatorial optimization, the actions are subsets of
a ground set of elements (also called components) with some combinatorial structure. The loss of
an action is the sum of the losses of its elements. A particular well-studied instance is the ONLINE
SHORTEST PATH problem [Takimoto and Warmuth, 2003] on a graph, in which the actions are the
paths between two fixed vertices and the elements are the edges.
We study a sleeping variant of online combinatorial optimization where the adversary chooses not only
the losses but also the availability of the elements every round. The unavailable elements are called
sleeping or sabotaged. In the ONLINE SABOTAGED SHORTEST PATH problem, for example, the adversary specifies unavailable edges every round, and consequently the learner cannot choose any
path using those edges. A straightforward application of the sleeping experts algorithm proposed
by Freund et al. [1997] gives a no-regret learner, but it takes exponential time (in the input graph
size) every round. The design of a computationally efficient no-regret algorithm for the ONLINE SABOTAGED SHORTEST PATH problem was presented as an open problem at COLT 2015 by Koolen
et al. [2015].
In this paper, we resolve this open problem and prove that the ONLINE SABOTAGED SHORTEST PATH
problem is computationally hard. Specifically, we show that a polynomial-time low-regret algorithm
for this problem implies a polynomial-time algorithm for PAC learning DNF expressions, which is
a long-standing open problem. The best known algorithm for PAC learning DNF expressions on n
variables has time complexity 2^{Õ(n^{1/3})} [Klivans and Servedio, 2001].
∗ This work was done while the authors were at Yahoo Research.
† Current affiliation: Google Research.
30th Conference on Neural Information Processing Systems (NIPS 2016), Barcelona, Spain.
Our reduction framework (Section 4) in fact shows a general result that any online sleeping combinatorial optimization problem with two simple structural properties is as hard as PAC learning
DNF expressions. Leveraging this result, we obtain hardness results for the sleeping variants of well-studied online combinatorial optimization problems for which a polynomial-time no-regret algorithm exists: ONLINE MINIMUM SPANNING TREE, ONLINE k-SUBSETS, ONLINE k-TRUNCATED
PERMUTATIONS, ONLINE MINIMUM CUT, and ONLINE BIPARTITE MATCHING (Section 5).
Our hardness result applies to the worst-case adversary as well as a stochastic adversary, who draws
an i.i.d. sample every round from a fixed (but unknown to the learner) joint distribution over availabilities and losses. This implies that no-regret algorithms would require even stronger restrictions
on the adversary.
1.1 Related Work
Online Combinatorial Optimization. The standard problem of online linear optimization with
d actions (Experts setting) admits algorithms with O(d) running time per round and O(√(T log d))
regret after T rounds [Littlestone and Warmuth, 1994, Freund and Schapire, 1997], which is minimax optimal [Cesa-Bianchi and Lugosi, 2006, Chapter 2]. A naive application of such algorithms
to an online combinatorial optimization problem (precise definitions to be given momentarily) over a
ground set of d elements will result in exp(O(d)) running time per round and O(√(Td)) regret.
Despite this, many online combinatorial optimization problems, such as the ones considered in this
paper, admit algorithms with³ poly(d) running time per round and O(poly(d)√T) regret [Takimoto
and Warmuth, 2003, Kalai and Vempala, 2005, Koolen et al., 2010, Audibert et al., 2013]. In fact,
Kalai and Vempala [2005] shows that the existence of a polynomial-time algorithm for an offline
combinatorial problem implies the existence of an algorithm for the corresponding online optimization problem with the same per-round running time and O(poly(d)√T) regret.
Online Sleeping Optimization. In studying online sleeping optimization, three different notions
of regret have been used: (a) policy regret, (b) ranking regret, and (c) per-action regret, in decreasing
order of computational hardness to achieve no-regret. Policy regret is the total difference between
the loss of the algorithm and the loss of the best policy, which maps a set of available actions and
the observed loss sequence to an available action [Neu and Valko, 2014]. Ranking regret is the
total difference between the loss of the algorithm and the loss of the best ranking of actions, which
corresponds to a policy that chooses in each round the highest-ranked available action [Kleinberg
et al., 2010, Kanade and Steinke, 2014, Kanade et al., 2009]. Per-action regret is the difference
between the loss of the algorithm and the loss of an action, summed over only the rounds in which
the action is available [Freund et al., 1997, Koolen et al., 2015]. Note that policy regret upper bounds
ranking regret, and while ranking regret and per-action regret are generally incomparable, per-action
regret is usually the smallest of the three notions.
The sleeping Experts (also known as Specialists) setting has been extensively studied in the literature
[Freund et al., 1997, Kanade and Steinke, 2014]. In this paper we focus on the more general online
sleeping combinatorial optimization problem, and in particular, the per-action notion of regret.
A summary of known results for online sleeping optimization problems is given in Figure 1. Note
in particular that an efficient algorithm was known for minimizing per-action regret in the sleeping
Experts problem [Freund et al., 1997]. We show in this paper that a similar efficient algorithm for
minimizing per-action regret in online sleeping combinatorial optimization problems cannot exist,
unless there is an efficient algorithm for learning DNFs. Our reduction technique is closely related to
that of Kanade and Steinke [2014], who reduced agnostic learning of disjunctions to ranking regret
minimization in the sleeping Experts setting.
2 Preliminaries
An instance of online combinatorial optimization is defined by a ground set U of d elements, and
a decision set D of actions, each of which is a subset of U . In each round t, the online learner is
required to choose an action V_t ∈ D, while simultaneously an adversary chooses a loss function
³ In this paper, we use the poly(·) notation to indicate a polynomially bounded function of the arguments.
Regret notion | Bound | Sleeping Experts | Sleeping Combinatorial Opt.
Policy | Upper | O(√(T log d)), under ILA [Kanade et al., 2009] | O(poly(d)√T), under ILA [Neu and Valko, 2014, Abbasi-Yadkori et al., 2013]
Policy | Lower | | Ω(poly(d)T^{1−δ}), under SLA [Abbasi-Yadkori et al., 2013]
Ranking | Lower | Ω(poly(d)T^{1−δ}), under SLA [Kanade and Steinke, 2014] | Ω(exp(Ω(d))√T), under SLA [Easy construction, omitted]
Per-action | Upper | O(√(T log d)), adversarial setting [Freund et al., 1997] |
Per-action | Lower | | Ω(poly(d)T^{1−δ}), under SLA [This paper]
Figure 1:
Summary of known results. Stochastic Losses and Availabilities (SLA) assumption is where
adversary chooses a joint distribution over loss and availability before the first round, and takes an i.i.d. sample
every round. Independent Losses and Availabilities (ILA) assumption is where adversary chooses losses and
availabilities independently of each other (one of the two may be adversarially chosen; the other one is then
chosen i.i.d in each round). Policy regret upper bounds ranking regret which in turn upper bounds per-action
regret for the problems of interest; hence some bounds shown in some cells of the table carry over to other
cells by implication and are not shown for clarity. The lower bound on ranking regret in online sleeping
combinatorial optimization is unconditional and holds for any algorithm, efficient or not. All other lower
bounds are computational, i.e. for polynomial time algorithms, assuming intractability of certain well-studied
learning problems, such as learning DNFs or learning noisy parities.
ℓ_t : U → [−1, 1]. The loss of any V ∈ D is given by (with some abuse of notation)
ℓ_t(V) := Σ_{e∈V} ℓ_t(e).
The learner suffers loss ℓ_t(V_t) and obtains ℓ_t as feedback. The regret of the learner with respect to
an action V ∈ D is defined to be
Regret_T(V) := Σ_{t=1}^T ℓ_t(V_t) − ℓ_t(V).
We say that an online optimization algorithm has a regret bound of f(d, T) if Regret_T(V) ≤ f(d, T)
for all V ∈ D. We say that the algorithm has no regret if f(d, T) = poly(d)T^{1−δ} for some
δ ∈ (0, 1), and it is computationally efficient if it has a per-round running time of order poly(d, T).
We now define an instance of online sleeping combinatorial optimization. In this setting, at the
start of each round t, the adversary selects a set of sleeping elements S_t ⊆ U and reveals it to the
learner. Define A_t = {V ∈ D | V ∩ S_t = ∅}, the set of awake actions at round t; the remaining
actions in D, called sleeping actions, are unavailable to the learner for that round. If At is empty,
i.e., there are no awake actions, then the learner is not required to do anything for that round and the
round is discarded from computation of the regret.
For the rest of the paper, unless noted otherwise, we use per-action regret as our performance measure. Per-action regret with respect to V ∈ D is defined as:
Regret_T(V) := Σ_{t: V∈A_t} ℓ_t(V_t) − ℓ_t(V).    (1)
In other words, our notion of regret considers only the rounds in which V is awake.
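As a concrete reading of definition (1), the following sketch (our own, with an assumed representation of actions as frozensets and rounds as tuples) counts only the rounds in which the comparator action is awake:

def per_action_regret(V, rounds):
    # rounds: list of (awake_actions, chosen_action, loss) tuples, where
    # each action is a frozenset of elements and loss maps an action to
    # its additive loss for that round. Rounds where V sleeps are skipped.
    regret = 0.0
    for awake, V_t, loss in rounds:
        if V in awake:
            regret += loss(V_t) - loss(V)
    return regret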
For clarity, we define an online combinatorial optimization problem as a family of instances of online
combinatorial optimization (and correspondingly for online sleeping combinatorial optimization).
For example, O NLINE S HORTEST PATH problem is the family of all instances of all graphs with
designated source and sink vertices, where the decision set D is a set of paths from the source to
sink, and the elements are edges of the graph.
Our main result is that many natural online sleeping combinatorial optimization problems are unlikely to admit a computationally efficient no-regret algorithm, although their non-sleeping versions
(i.e., A_t = D for all t) do. More precisely, we show that these online sleeping combinatorial optimization problems are at least as hard as PAC learning DNF expressions, a long-standing open
problem.
3 Online Agnostic Learning of Disjunctions
Instead of directly reducing PAC learning DNF expressions to no-regret learning for online sleeping combinatorial optimization problems, we use an intermediate problem, online agnostic learning
of disjunctions. By a standard online-to-batch conversion argument [Kanade and Steinke, 2014],
online agnostic learning of disjunctions is at least as hard as agnostic improper PAC-learning of disjunctions [Kearns et al., 1994], which in turn is at least as hard as PAC-learning of DNF expressions
[Kalai et al., 2012]. The online-to-batch conversion argument allows us to assume the stochastic
adversary (i.i.d. input sequence) for online agnostic learning of disjunctions, which in turn implies
that our reduction applies to online sleeping combinatorial optimization with a stochastic adversary.
Online agnostic learning of disjunctions is a repeated game between the adversary and a learning
algorithm. Let n denote the number of variables in the disjunction. In each round t, the adversary
chooses a vector x_t ∈ {0, 1}^n, the algorithm predicts a label ŷ_t ∈ {0, 1} and then the adversary
reveals the correct label y_t ∈ {0, 1}. If ŷ_t ≠ y_t, we say that the algorithm makes an error.
For any predictor φ : {0, 1}^n → {0, 1}, we define the regret with respect to φ after T rounds as
Regret_T(φ) = Σ_{t=1}^T 1[ŷ_t ≠ y_t] − 1[φ(x_t) ≠ y_t].
Our goal is to design an algorithm that is competitive with any disjunction, i.e. for any disjunction
φ over n variables, the regret is bounded by poly(n) · T^{1−δ} for some δ ∈ (0, 1). Recall that a
disjunction over n variables is a boolean function φ : {0, 1}^n → {0, 1} that on an input x =
(x(1), x(2), . . . , x(n)) outputs
φ(x) = (∨_{i∈P} x(i)) ∨ (∨_{i∈N} ¬x(i)),
where P and N are disjoint subsets of {1, 2, . . . , n}. We allow either P or N to be empty, and the
empty disjunction is interpreted as the constant 0 function. For any index i ∈ {1, 2, . . . , n}, we call
it a relevant index for φ if i ∈ P ∪ N and an irrelevant index for φ otherwise. For any relevant index i,
we call it positive if i ∈ P and negative if i ∈ N.
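For concreteness, a disjunction over P and N can be evaluated as in the sketch below (our own helper, using 0-based indices):

def make_disjunction(P, N):
    # phi(x) = OR_{i in P} x[i]  OR  OR_{i in N} (NOT x[i]);
    # P and N are disjoint; both empty gives the constant 0 function.
    def phi(x):
        return int(any(x[i] == 1 for i in P) or any(x[i] == 0 for i in N))
    return phi

# Example: phi = x(1) OR (NOT x(3)) on n = 4 variables (0-based indices 0 and 2).
phi = make_disjunction(P={0}, N={2})
assert phi([1, 0, 1, 0]) == 1 and phi([0, 0, 1, 0]) == 0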
4 General Hardness Result
In this section, we identify two combinatorial properties that make online sleeping combinatorial optimization problems computationally hard.
Definition 1. Let n be a positive integer. Consider an instance of online sleeping combinatorial
optimization where the ground set U has d elements with 3n + 2 ≤ d ≤ poly(n). This instance
is called a hard instance with parameter n, if there exists a subset U_s ⊆ U of size 3n + 2 and a
bijection between U_s and the set (i.e., a labeling of the elements in U_s by the set)
∪_{i=1}^n {(i, 0), (i, 1), (i, ∗)} ∪ {0, 1},
such that the decision set D satisfies the following properties:
1. (Heaviness) Any action V ∈ D has at least n + 1 elements in U_s.
2. (Richness) For all (s_1, . . . , s_{n+1}) ∈ {0, 1, ∗}^n × {0, 1}, the action {(1, s_1), (2, s_2), . . . , (n, s_n), s_{n+1}} ⊆ U_s is in D.
We now show how to use the above definition of hard instances to prove the hardness of an online
sleeping combinatorial optimization (OSCO) problem by reducing from the online agnostic learning
of disjunction (OALD) problem. At a high level, the reduction works as follows. Given an instance
of the OALD problem, we construct a specific instance of the OSCO problem and a sequence of losses
and availabilities based on the input to the OALD problem. This reduction has the property that
for any disjunction, there is a special set of actions of size n + 1 such that (a) exactly one action
is available in any round and (b) the loss of this action exactly equals the loss of the disjunction on
the current input example. Furthermore, the action chosen by the OSCO algorithm can be converted into a
prediction in the OALD problem with loss that is no greater. These two facts imply that the regret
of the OALD algorithm is at most n + 1 times the per-action regret of the OSCO algorithm.
Algorithm 1 Alg_disj FOR LEARNING DISJUNCTIONS
Require: An algorithm Alg_osco for the online sleeping combinatorial optimization problem, and the
input size n for the disjunction learning problem.
1: Construct a hard instance (U, D) with parameter n of the online sleeping combinatorial optimization problem, and run Alg_osco on it.
2: for t = 1, 2, . . . , T do
3:   Receive x_t ∈ {0, 1}^n.
4:   Set the set of sleeping elements for Alg_osco to be S_t = {(i, 1 − x_t(i)) | i = 1, 2, . . . , n}.
5:   Obtain an action V_t ∈ D by running Alg_osco such that V_t ∩ S_t = ∅.
6:   Set ŷ_t = 1[0 ∉ V_t].
7:   Predict ŷ_t, and receive true label y_t.
8:   In algorithm Alg_osco, set the loss of the awake elements e ∈ U \ S_t as follows:
       ℓ_t(e) = (1 − y_t)/(n + 1)           if e ≠ 0,
       ℓ_t(e) = y_t − n(1 − y_t)/(n + 1)    if e = 0.
9: end for
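The following sketch restates Algorithm 1 in code. The Alg_osco interface (an .act method that returns an awake action and a .feed method that accepts the loss assignment) is our own assumption for illustration, not part of the paper:

def alg_disj(alg_osco, ground_set, n, stream):
    # stream yields (x_t, y_t) pairs; elements of ground_set follow
    # Definition 1: pairs (i, b) for i = 1..n plus the elements 0 and 1.
    for x, y in stream:
        sleeping = {(i, 1 - x[i - 1]) for i in range(1, n + 1)}   # step 4
        V = alg_osco.act(sleeping)                                # step 5: V disjoint from sleeping
        y_hat = int(0 not in V)                                   # step 6
        yield y_hat, y                                            # step 7
        # Step 8: losses of awake elements; 0 and 1 are always awake.
        losses = {e: (1 - y) / (n + 1) for e in ground_set - sleeping}
        losses[0] = y - n * (1 - y) / (n + 1)
        alg_osco.feed(losses)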
Theorem 1. Consider an online sleeping combinatorial optimization problem such that for any
positive integer n, there is a hard instance with parameter n of the problem. Suppose there is an
algorithm Alg_osco that for any instance of the problem with ground set U of size d, runs in time
poly(T, d) and has regret bounded by poly(d) · T^{1−δ} for some δ ∈ (0, 1). Then, there exists an
algorithm Alg_disj for online agnostic learning of disjunctions over n variables with running time
poly(T, n) and regret poly(n) · T^{1−δ}.
Proof. Alg_disj is given in Algorithm 1. First, we note that in each round t, we have
ℓ_t(V_t) ≥ 1[y_t ≠ ŷ_t].    (2)
We prove this separately for two different cases; in both cases, the inequality follows from the
heaviness property, i.e., the fact that |V_t| ≥ n + 1.
1. If 0 ∉ V_t, then the prediction of Alg_disj is ŷ_t = 1, and thus
   ℓ_t(V_t) = |V_t| · (1 − y_t)/(n + 1) ≥ 1 − y_t = 1[y_t ≠ ŷ_t].
2. If 0 ∈ V_t, then the prediction of Alg_disj is ŷ_t = 0, and thus
   ℓ_t(V_t) = (|V_t| − 1) · (1 − y_t)/(n + 1) + y_t − n(1 − y_t)/(n + 1) ≥ y_t = 1[y_t ≠ ŷ_t].
Note that if V_t satisfies the equality |V_t| = n + 1, then we have an equality ℓ_t(V_t) = 1[y_t ≠ ŷ_t]; this
property will be useful later.
Next, let φ be an arbitrary disjunction, and let i_1 < i_2 < ··· < i_m be its relevant indices sorted
in increasing order. Define f_φ : {1, 2, . . . , m} → {0, 1} as f_φ(j) := 1[i_j is a positive index for φ],
and define the set of elements W_φ := {(i, ∗) | i is an irrelevant index for φ}. Finally, let D_φ =
{V_φ^1, V_φ^2, . . . , V_φ^{m+1}} be the set of m + 1 actions where for j = 1, 2, . . . , m, we define
V_φ^j := {(i_ℓ, 1 − f_φ(ℓ)) | 1 ≤ ℓ < j} ∪ {(i_j, f_φ(j))} ∪ {(i_ℓ, ∗) | j < ℓ ≤ m} ∪ W_φ ∪ {1},
and
V_φ^{m+1} := {(i_ℓ, 1 − f_φ(ℓ)) | 1 ≤ ℓ ≤ m} ∪ W_φ ∪ {0}.
The actions in D_φ are indeed in the decision set D due to the richness property.
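The special action set D_φ is mechanical to construct; the sketch below (our own encoding, with '*' standing for the wildcard label ∗) builds the m + 1 actions from the relevant indices of φ:

def build_D_phi(relevant, positive, n):
    # relevant: sorted list [i_1 < ... < i_m] of relevant indices of phi;
    # positive: set of positive indices, so f_phi(j) = 1 iff relevant[j-1] is positive.
    m = len(relevant)
    f = [1 if i in positive else 0 for i in relevant]
    W = {(i, '*') for i in range(1, n + 1) if i not in relevant}   # irrelevant indices
    D_phi = []
    for j in range(m):                                             # builds V_phi^{j+1}
        V = {(relevant[l], 1 - f[l]) for l in range(j)}
        V |= {(relevant[j], f[j])}
        V |= {(relevant[l], '*') for l in range(j + 1, m)}
        D_phi.append(frozenset(V | W | {1}))
    D_phi.append(frozenset({(relevant[l], 1 - f[l]) for l in range(m)} | W | {0}))
    return D_phi   # each action has exactly n + 1 elements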
We claim that D_φ contains exactly one awake action in every round and the awake action contains
the element 1 if and only if φ(x_t) = 1. First, we prove uniqueness: if V_φ^j and V_φ^k (where j < k)
are both awake in the same round, then (i_j, f_φ(j)) ∈ V_φ^j and (i_j, 1 − f_φ(j)) ∈ V_φ^k are both awake
elements, contradicting our choice of S_t. To prove the rest of the claim, we consider two cases:
1. If φ(x_t) = 1, then there is at least one j ∈ {1, 2, . . . , m} such that x_t(i_j) = f_φ(j). Let j′
   be the smallest such j. Then, by construction, the set V_φ^{j′} is awake at time t, and 1 ∈ V_φ^{j′},
   as required.
2. If φ(x_t) = 0, then for all j ∈ {1, 2, . . . , m} we must have x_t(i_j) = 1 − f_φ(j). Then, by
   construction, the set V_φ^{m+1} is awake at time t, and 0 ∈ V_φ^{m+1}, as required.
Since every action in D_φ has exactly n + 1 elements, and if V is the awake action in D_φ at time t, we
just showed that 1 ∈ V if and only if φ(x_t) = 1, exactly the same argument as in the beginning of
this proof implies that
ℓ_t(V) = 1[y_t ≠ φ(x_t)].    (3)
Furthermore, since exactly one action in D_φ is awake every round, we have
Σ_{t=1}^T 1[y_t ≠ φ(x_t)] = Σ_{V∈D_φ} Σ_{t: V∈A_t} ℓ_t(V).    (4)
Finally, we can bound the regret of algorithm Alg_disj (denoted Regret_T^{disj}) in terms of the regret of
algorithm Alg_osco (denoted Regret_T^{osco}) as follows:
Regret_T^{disj}(φ) = Σ_{t=1}^T 1[ŷ_t ≠ y_t] − 1[φ(x_t) ≠ y_t]
              ≤ Σ_{V∈D_φ} Σ_{t: V∈A_t} ℓ_t(V_t) − ℓ_t(V)
              = Σ_{V∈D_φ} Regret_T^{osco}(V) ≤ |D_φ| · poly(d) · T^{1−δ} = poly(n) · T^{1−δ}.
The first inequality follows by (2) and (4), and the last equation since |D_φ| ≤ n + 1 and d ≤ poly(n).
4.1 Hardness results for Policy Regret and Ranking Regret
It is easy to see that our technique for proving hardness extends to ranking regret (and therefore, policy regret). The reduction simply uses any algorithm for minimizing ranking regret in
Algorithm 1 as Alg_osco. This is because in the proof of Theorem 1, the set D_φ has the property that
exactly one action V_t ∈ D_φ is awake in any round t, and ℓ_t(V_t) = 1[y_t ≠ ŷ_t]. Thus, if we consider
a ranking where the actions in D_φ are ranked at the top positions (in arbitrary order), the loss of this
ranking exactly equals the number of errors made by the disjunction φ on the input sequence. The
same arguments as in the proof of Theorem 1 then imply that the regret of Alg_disj is bounded by that
of Alg_osco, implying the hardness result.
5 Hard Instances for Specific Problems
Now we apply Theorem 1 to prove that many online sleeping combinatorial optimization problems
are as hard as PAC learning DNF expressions by constructing hard instances for them. Note that all
these problems admit efficient no-regret algorithms in the non-sleeping setting.
5.1 Online Shortest Path Problem
In the ONLINE SHORTEST PATH problem, the learner is given a directed graph G = (V, E) and
designated source and sink vertices s and t. The ground set is the set of edges, i.e. U = E,
and the decision set D is the set of all paths from s to t. The sleeping version of this problem
has been called the ONLINE SABOTAGED SHORTEST PATH problem by Koolen et al. [2015], who
posed the open question of whether it admits an efficient no-regret algorithm. For any n ∈ ℕ, a
hard instance is the graph G^(n) shown in Figure 2. It has 3n + 2 edges that are labeled by the
elements of ground set U = ∪_{i=1}^n {(i, 0), (i, 1), (i, ∗)} ∪ {0, 1}, as required. Now note that any
s-t path in this graph has length exactly n + 1, so D satisfies the heaviness property. Furthermore,
the richness property is clearly satisfied, since for any s ∈ {0, 1, ∗}^n × {0, 1}, the set of edges
{(1, s_1), (2, s_2), . . . , (n, s_n), s_{n+1}} is an s-t path and therefore in D.
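The graph G^(n) and the richness check are easy to reproduce in code; the sketch below (our own encoding, with vertex 0 as s and vertex n + 1 as t) verifies that every labeling yields an s-t path of length n + 1:

import itertools

def build_G(n):
    # Edge labels of G^(n) mapped to (tail, head) vertex pairs.
    edges = {}
    for i in range(1, n + 1):
        for b in (0, 1, '*'):
            edges[(i, b)] = (i - 1, i)          # three parallel edges v_{i-1} -> v_i
    edges[0] = (n, n + 1)                       # two parallel edges v_n -> t
    edges[1] = (n, n + 1)
    return edges

n = 3
G = build_G(n)
for s in itertools.product((0, 1, '*'), repeat=n):
    for last in (0, 1):
        labels = [(i + 1, s[i]) for i in range(n)] + [last]       # richness candidate
        hops = [G[e] for e in labels]
        assert hops[0][0] == 0 and hops[-1][1] == n + 1           # runs from s to t
        assert all(a[1] == b[0] for a, b in zip(hops, hops[1:]))  # consecutive edges chain
        assert len(labels) == n + 1                               # heaviness: length n + 1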
Figure 2: Graph G^(n). (Vertices s = v_0, v_1, . . . , v_n, and t; for each i, three parallel edges labeled (i, 1), (i, ∗), (i, 0) connect v_{i−1} to v_i, and two parallel edges labeled 1 and 0 connect v_n to t.)
Figure 3: Graph P^(n). This is a complete bipartite graph as described in the text, but only the special labeled edges are shown for clarity. (For each i, the edges labeled (i, 1), (i, ∗), (i, 0) join u_i to v_{i,1}, v_{i,∗}, v_{i,0}, and the edges labeled 1 and 0 join u_{n+1} to v_1 and v_0.)
5.2 Online Minimum Spanning Tree Problem
In the ONLINE MINIMUM SPANNING TREE problem, the learner is given a fixed graph G = (V, E).
The ground set here is the set of edges, i.e. U = E, and the decision set D is the set of spanning
trees in the graph. For any n ∈ ℕ, a hard instance is the same graph G^(n) shown in Figure 2, except
that the edges are undirected. Note that the spanning trees in G^(n) are exactly the paths from s to
t. The hardness of this problem immediately follows from the hardness of the ONLINE SHORTEST
PATHS problem.
5.3 Online k-Subsets Problem
In the ONLINE k-SUBSETS problem, the learner is given a fixed ground set of elements U. The
decision set D is the set of subsets of U of size k. For any n ∈ ℕ, we construct a hard instance with
parameter n of the ONLINE k-SUBSETS problem with k = n + 1 and d = 3n + 2. The set D of all
subsets of size k = n + 1 of a ground set U of size d = 3n + 2 clearly satisfies both the heaviness
and richness properties.
5.4 Online k-Truncated Permutations Problem
In the ONLINE k-TRUNCATED PERMUTATIONS problem (also called the ONLINE k-RANKING
problem), the learner is given a complete bipartite graph with k nodes on one side and m ≥ k nodes
on the other, and the ground set U is the set of all edges; thus d = km. The decision set D is the
set of all maximal matchings, which can be interpreted as truncated permutations of k out of m objects. For any n ∈ ℕ, we construct a hard instance with parameter n of the ONLINE k-TRUNCATED
PERMUTATIONS problem with k = n + 1, m = 3n + 2 and d = km = (n + 1)(3n + 2). Let
L = {u_1, u_2, . . . , u_{n+1}} be the nodes on the left side of the bipartite graph, and since m = 3n + 2,
let R = {v_{i,0}, v_{i,1}, v_{i,∗} | i = 1, 2, . . . , n} ∪ {v_0, v_1} denote the nodes on the right side of the
graph. The ground set U consists of all d = km = (n + 1)(3n + 2) edges joining nodes in L to
nodes in R. We now specify the special 3n + 2 elements of the ground set U: for i = 1, 2, . . . , n,
label the edges (u_i, v_{i,0}), (u_i, v_{i,1}), (u_i, v_{i,∗}) by (i, 0), (i, 1), (i, ∗) respectively. Finally, label the
edges (u_{n+1}, v_0), (u_{n+1}, v_1) by 0 and 1 respectively. The resulting bipartite graph P^(n) is shown in
Figure 3, where only the special labeled edges are shown for clarity.
Now note that any maximal matching in this graph has exactly n + 1 edges, so the heaviness condition
is satisfied. Furthermore, the richness property is satisfied, since for any s ∈ {0, 1, ∗}^n × {0, 1}, the
set of edges {(1, s_1), (2, s_2), . . . , (n, s_n), s_{n+1}} is a maximal matching and therefore in D.
Figure 4: Graph M^(n) for the ONLINE BIPARTITE MATCHING problem. (For each i, three parallel edges labeled (i, 1), (i, ∗), (i, 0) join u_i to v_i, and two parallel edges labeled 1 and 0 join u_{n+1} to v_{n+1}.)
Figure 5: Graph C^(n) for the ONLINE MINIMUM CUT problem. (For each i, a three-edge path s → u_i → v_i → t with edges labeled (i, 1), (i, ∗), (i, 0), plus a two-edge path s → w → t with edges labeled 1 and 0; every s-t cut must contain at least one edge from each of these n + 1 parallel paths.)
5.5 Online Bipartite Matching Problem
In the ONLINE BIPARTITE MATCHING problem, the learner is given a fixed bipartite graph
G = (V, E). The ground set here is the set of edges, i.e. U = E, and the decision set D is
the set of maximal matchings in G. For any n ∈ ℕ, a hard instance with parameter n is the
graph M^(n) shown in Figure 4. It has 3n + 2 edges that are labeled by the elements of ground
set U = ∪_{i=1}^n {(i, 0), (i, 1), (i, ∗)} ∪ {0, 1}, as required. Now note that any maximal matching in this graph has size exactly n + 1, so D satisfies the heaviness property. Furthermore,
the richness property is clearly satisfied, since for any s ∈ {0, 1, ∗}^n × {0, 1}, the set of edges
{(1, s_1), (2, s_2), . . . , (n, s_n), s_{n+1}} is a maximal matching and therefore in D.
5.6 Online Minimum Cut Problem
In the ONLINE MINIMUM CUT problem the learner is given a fixed graph G = (V, E) with a
designated pair of vertices s and t. The ground set here is the set of edges, i.e. U = E, and the
decision set D is the set of cuts separating s and t: a cut here is a set of edges that, when removed from
the graph, disconnects s from t. For any n ∈ ℕ, a hard instance is the graph C^(n) shown in Figure 5.
It has 3n + 2 edges that are labeled by the elements of ground set U = ∪_{i=1}^n {(i, 0), (i, 1), (i, ∗)} ∪
{0, 1}, as required. Now note that any cut in this graph has size at least n + 1, so D satisfies
the heaviness property. Furthermore, the richness property is clearly satisfied, since for any s ∈
{0, 1, ∗}^n × {0, 1}, the set of edges {(1, s_1), (2, s_2), . . . , (n, s_n), s_{n+1}} is a cut and therefore in D.
6 Conclusion
In this paper we showed that obtaining an efficient no-regret algorithm for sleeping versions of several natural online combinatorial optimization problems is as hard as efficiently PAC learning DNF
expressions, a long-standing open problem. Our reduction technique requires only very modest conditions for hard instances of the problem of interest, and in fact is considerably more flexible than
the specific form presented in this paper. We believe that almost any natural combinatorial optimization problem that includes instances with exponentially many solutions will be a hard problem in
its online sleeping variant. Furthermore, our hardness result is via stochastic i.i.d. availabilities and
losses, a rather benign form of adversary. This suggests that obtaining sublinear per-action regret
is perhaps a rather hard objective, and suggests that to obtain efficient algorithms we might need to
either (a) make suitable simplifications of the regret criterion or (b) restrict the adversary's power.
References
Yasin Abbasi-Yadkori, Peter L. Bartlett, Varun Kanade, Yevgeny Seldin, and Csaba Szepesvári. Online learning in markov decision processes with adversarially chosen transition probability distributions. In Advances in Neural Information Processing Systems (NIPS), pages 2508-2516, 2013.
Jean-Yves Audibert, Sébastien Bubeck, and Gábor Lugosi. Regret in online combinatorial optimization. Mathematics of Operations Research, 39(1):31-45, 2013.
Nicolò Cesa-Bianchi and Gábor Lugosi. Prediction, Learning and Games. Cambridge University Press, New York, NY, 2006.
Yoav Freund and Robert E. Schapire. A decision-theoretic generalization of on-line learning and an application to boosting. Journal of Computer and System Sciences, 55(1):119-139, 1997.
Yoav Freund, Robert E. Schapire, Yoram Singer, and Manfred K. Warmuth. Using and combining predictors that specialize. In Proceedings of the 29th Annual ACM Symposium on Theory of Computing, pages 334-343. ACM, 1997.
Adam Kalai and Santosh Vempala. Efficient algorithms for online decision problems. Journal of Computer and System Sciences, 71(3):291-307, 2005.
Adam Tauman Kalai, Varun Kanade, and Yishay Mansour. Reliable agnostic learning. Journal of Computer and System Sciences, 78(5):1481-1495, 2012.
Varun Kanade and Thomas Steinke. Learning hurdles for sleeping experts. ACM Transactions on Computation Theory (TOCT), 6(3):11, 2014.
Varun Kanade, H. Brendan McMahan, and Brent Bryan. Sleeping experts and bandits with stochastic action availability and adversarial rewards. In Proceedings of the 12th International Conference on Artificial Intelligence and Statistics (AISTATS), pages 272-279, 2009.
Michael J. Kearns, Robert E. Schapire, and Linda M. Sellie. Toward efficient agnostic learning. Machine Learning, 17(2-3):115-141, 1994.
Robert Kleinberg, Alexandru Niculescu-Mizil, and Yogeshwer Sharma. Regret bounds for sleeping experts and bandits. Machine Learning, 80(2-3):245-272, 2010.
Adam R. Klivans and Rocco Servedio. Learning DNF in time 2^{Õ(n^{1/3})}. In Proceedings of the 33rd Annual ACM Symposium on Theory of Computing (STOC), pages 258-265. ACM, 2001.
Wouter M. Koolen, Manfred K. Warmuth, and Jyrki Kivinen. Hedging structured concepts. In Adam Tauman Kalai and Mehryar Mohri, editors, Proceedings of the 23rd Conference on Learning Theory (COLT), pages 93-105, 2010.
Wouter M. Koolen, Manfred K. Warmuth, and Dmitry Adamskiy. Open problem: Online sabotaged shortest path. In Proceedings of the 28th Conference on Learning Theory (COLT), 2015.
Nick Littlestone and Manfred K. Warmuth. The weighted majority algorithm. Information and Computation, 108(2):212-261, 1994.
Gergely Neu and Michal Valko. Online combinatorial optimization with stochastic decision sets and adversarial losses. In Advances in Neural Information Processing Systems, pages 2780-2788, 2014.
Eiji Takimoto and Manfred K. Warmuth. Path kernels and multiplicative updates. The Journal of Machine Learning Research, 4:773-818, 2003.
6,026 | 6,451 | Learned Region Sparsity and Diversity
Also Predict Visual Attention
Zijun Wei¹*, Hossein Adeli²*, Gregory Zelinsky¹,², Minh Hoai¹, Dimitris Samaras¹
1. Department of Computer Science  2. Department of Psychology, Stony Brook University
1. {zijwei, minhhoai, samaras}@cs.stonybrook.edu
2. {hossein.adelijelodar, gregory.zelinsky}@stonybrook.edu
* Both authors contributed equally to this work
Abstract
Learned region sparsity has achieved state-of-the-art performance in classification
tasks by exploiting and integrating a sparse set of local information into global
decisions. The underlying mechanism resembles how people sample information
from an image with their eye movements when making similar decisions. In this
paper we incorporate the biologically plausible mechanism of Inhibition of Return
into the learned region sparsity model, thereby imposing diversity on the selected
regions. We investigate how these mechanisms of sparsity and diversity relate to
visual attention by testing our model on three different types of visual search tasks.
We report state-of-the-art results in predicting the locations of human gaze fixations,
even though our model is trained only on image-level labels without object location
annotations. Notably, the classification performance of the extended model remains
the same as the original. This work suggests a new computational perspective
on visual attention mechanisms, and shows how the inclusion of attention-based
mechanisms can improve computer vision techniques.
1 Introduction
Visual spatial attention refers to the narrowing of processing in the brain to particular objects in
particular locations so as to mediate everyday tasks. A widely used paradigm for studying visual
spatial attention is visual search, where a desired object must be located and recognized in a typically
cluttered environment. Visual search is accompanied by observable estimates, in the form of
gaze fixations, of how attention samples information from a scene while searching for a target.
Efficient visual search requires prioritizing the locations of features of the target object class over
features at locations offering less evidence for the target [31]. Computational models of visual search
typically estimate and plot goal directed prioritization of visual space as priority maps for directing
attention [32]. This form of target directed prioritization is different from the saliency modeling
literature, where bottom-up feature contrast in an image is used to predict fixation behavior during
the free-viewing of scenes [16].
The field of fixation prediction is highly active and growing [2], although it was not until fairly
recently that attention researchers have begun using the sophisticated object detection techniques
developed in the computer vision literature [8, 18, 31]. The dominant method used in the visual
search literature to generate priority maps for detection has been the exhaustive detection mechanism
[8, 18]. Using this method, an object detector is applied to an image to provide bounding boxes
that are then combined, weighted by their detection scores, to generate a priority map [8]. While
these models have had success in predicting behavior, training these detectors requires human labeled
bounding boxes, which are expensive and laborious to collect, and also prone to individual annotator
differences.
30th Conference on Neural Information Processing Systems (NIPS 2016), Barcelona, Spain.
An alternative approach to modeling visual attention is to determine how model and behavioral task
performance depends on shared core computational principles [24]. To this end, a new class of
attention-inspired models have been developed and applied to tasks ranging from image captioning
[30] to hand writing generation [13], where selective spatial attention mechanisms have been shown
to emerge [1, 25]. By requiring visual inputs to be gated in a manner similar to the human gating
of visual inputs via fixations, these models are able to localize or ?attend? selectively to the most
informative regions of an input image while ignoring irrelevant visual inputs [25, 1]. This built in
attention mechanism enables the model of [30], trained only on generating captions, to bias the
visual input so as to gate only relevant information when generating each word to describe an image.
Priority maps were then generated to show the mapping of attended image areas to generated words.
While these new models show attention-like behavior, to our knowledge none have been used to
predict actual human allocations of attention.
The current work bridges the behavioral and computer vision literatures by using a classification
model that has biologically plausible constraints to create a priority map for the purpose of predicting
the allocation of spatial attention as measured by changes in fixation. The specific image-category
classification model that we use is called Region Ranking SVM (RRSVM) [29]. This model was
developed in our recent work [29], and it achieved state-of-the-art performance on a number of
classification tasks by learning categorization with locally-pooled information from input images.
This model works by imposing sparsity on selected image areas that contribute to the classification
decision, much like how humans prioritize visual space and sample with fixations only a sparse set of
image locations while attempting to detect and recognize object categories [4]. We believe that this
analogy between sparse sampling and attention makes this model a natural candidate for predicting
attention behavior in visual search tasks. It is worth noting that this model was originally created for
object classification and not localization, hence no object localization data is used to train it, unlike
standard fixation prediction algorithms [16, 17].
There are two contributions of our work. First, we show that the RRSVM model approaches state-of-the-art in predicting the fixations made by humans searching for the same targets in the same
images. This means that a model trained solely for the purpose of image classification, without any
localization data, is also able to predict the locations of fixations that people make while searching for
the to-be-classified objects. Second, we incorporate the biologically plausible constraint of Inhibition
of Return [10], which we model by requiring a set of diverse (minimally overlapping) sparse regions
in RRSVM. Incorporating this constraint, we are able to reduce the error in fixation prediction (up
to 21%). Importantly, adding the Inhibition of Return constraint does not affect the classification
performance. By building this bridge, we hope to show how automated object detection might be
improved by the inclusion of an attention mechanism, and how a recent attention-inspired approach
from computer vision might illuminate how the brain prioritizes visual information for the efficient
direction of spatial attention.
2 Region Ranking SVM
Here we review Region Ranking SVM (RRSVM) [29]. The main problem addressed by RRSVM is
image classification, which aims to recognize the semantic category of an image, such as whether
the image contains a certain object (e.g., car, cat) or portrays a certain action (e.g., jumping, typing).
RRSVM evaluates multiple local regions of an image, and subsequently outputs the classification
decision based on a sparse set of regions. This mechanism is noteworthy and different from other
approaches that aggregate information from multiple regions indistinguishably (e.g., [23, 28, 22, 14]).
RRSVM assumes training data consisting of images {B_i}_{i=1}^n and associated binary labels {y_i}_{i=1}^n
indicating the presence or absence of the visual element (object or action) of interest. To account
for the uncertainty of each semantic region in an image, RRSVM considers multiple local regions.
The number of regions can differ between images, but for brevity, assume each image has the
same number of regions. Let m be the number of regions for each image, and d the dimension
of each region descriptor. RRSVM represents each image as a matrix B_i ∈ ℝ^{d×m}, but the order
of the columns can be arbitrary. RRSVM jointly learns a region evaluation function and a region
selection function by minimizing:
λ‖w‖² + Σ_{i=1}^n (w^T Φ(B_i; w)s + b − y_i)²
subject to the constraints: s_1 ≥ s_2 ≥ ··· ≥ s_m ≥ 0 and h(Φ(B_i; w)s) ≤ 1. Here h(·) is the function that measures the spread
of the column vectors of a matrix: h([x_1, ···, x_n]) = Σ_{i=1}^n ‖x_i − (1/n) Σ_{j=1}^n x_j‖². w and b are
the weight vector and the bias term of an SVM classifier, which are the parameters of the region
evaluation function. Φ(B; w) denotes a matrix that can be obtained by rearranging the columns of
the matrix B so that w^T Φ(B; w) is a sequence of non-increasing values. The vector s is the weight
vector for combining the SVM region scores for each image [15]; this vector is common to all images
of a class.
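To make the decision rule concrete, the following sketch (our own) scores one image with a trained (w, s, b), where B stores one d-dimensional region descriptor per column; sorting the per-region SVM scores in non-increasing order realizes w^T Φ(B; w):

import numpy as np

def rrsvm_score(B, w, s, b):
    # B: d x m region descriptors; w: d-dimensional SVM weights;
    # s: m non-negative, non-increasing combination weights; b: bias.
    region_scores = w @ B                    # one SVM score per region
    ranked = np.sort(region_scores)[::-1]    # w^T Phi(B; w): scores in non-increasing order
    return float(ranked @ s + b)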
The objective of the above formulation consists of the regularization term λ‖w‖² and the sum of
squared losses. This objective is based purely on classification performance. However, note that
the classification decision is based on both the region evaluation function (i.e., w, b) and the region
selection function (i.e., s), which are simultaneously learned using the above formulation. What
is interesting is that the obtained s vector is always sparse. An experiment [29] on the ImageNet
dataset [27] with 1000 classes showed that RRSVM generally uses 20 regions or less (from hundreds
of local regions considered). This intriguing fact prompted us to consider the connection between
sparse region selection and visual attention. Would machine-based discriminative localization reflect
the allocation of human attention in visual search? It turns out that there is compelling evidence for
a relationship, as will be shown in the experiment section. This relationship can be strengthened if
RRSVM is extended to incorporate Inhibition of Return in the region selection process, which will
be explained next.
3 Incorporating Inhibition of Return into Region Ranking SVM
A mechanism critical to the modeling of human visual search behavior is Inhibition of Return:
the lower probability of re-fixating on or near already attended areas, possibly mediated by lateral
inhibition [16, 20]. This mechanism, however, is not currently enforced in the formulation of
RRSVM, and indeed the spatial relationship between selected regions is not considered. RRSVM
usually selects a sparse set of regions, but the selected regions are free to overlap and concentrate on
a single image area.
Inspired by Inhibition of Return, we consider an extension of RRSVM where non-maxima suppression
is incorporated into the process of selecting regions. This mechanism will select the local maximum
for nearby activation areas (a potential fixation location) and discard the rest (non-maxima nearby
locations). The biological plausibility of non-maxima suppression has been discussed in previous
work, where it was shown to be a plausible method for allowing the stronger activations to stand out
(see [21, 7] for details).
To incorporate non-maxima suppression in the framework of RRSVM, we replaced the region ranking
procedure Φ(B; w) of RRSVM by Φ(B_i; w, γ), a procedure that ranks and subsequently returns the
list of regions that do not significantly overlap with one another. In particular, we use intersection
over union to measure overlap, where γ is a threshold for tolerable overlap (we set γ = 0.5 in our
experiments). This leads to the following optimization problem:
minimize_{w,s,b}  λ‖w‖² + Σ_{i=1}^n (w^T Φ(B_i; w, γ)s + b − y_i)²    (1)
s.t.  s_1 ≥ s_2 ≥ ··· ≥ s_m ≥ 0,    (2)
      h(Φ(B_i; w, γ)s) ≤ 1.    (3)
The above formulation can be optimized in the same way as RRSVM in [29]. It will yield a classifier
that makes a decision based on a sparse and diverse set of regions. Sparsity is inherited from RRSVM,
and location diversity is attained using non-maxima suppression. Hereafter, we refer to this method
as Sparse Diverse Regions (SDR) classifier.
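The region-selection step Φ(B_i; w, γ) amounts to greedy non-maxima suppression over ranked regions. A minimal sketch, assuming axis-aligned boxes (x1, y1, x2, y2); the function names are our own:

def iou(a, b):
    # Intersection over union of two boxes (x1, y1, x2, y2).
    ix = max(0, min(a[2], b[2]) - max(a[0], b[0]))
    iy = max(0, min(a[3], b[3]) - max(a[1], b[1]))
    inter = ix * iy
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union > 0 else 0.0

def select_diverse_regions(boxes, scores, gamma=0.5):
    # Rank regions by SVM score, then greedily keep a region only if its
    # IoU with every region already kept is at most gamma.
    order = sorted(range(len(boxes)), key=lambda i: scores[i], reverse=True)
    kept = []
    for i in order:
        if all(iou(boxes[i], boxes[j]) <= gamma for j in kept):
            kept.append(i)
    return kept   # indices of the sparse, diverse region set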
4 Experiments and Analysis
We present here empirical evidence showing that learned region sparsity and diversity can also predict
visual attention. We first describe the implementation details of RRSVM and SDR. We then consider
attention prediction under three conditions: (1) single-target present, that is to find the one instance of
a target category appearing in a stimulus image; (2) target absent, i.e., searching for a target category
that does not appear in the image; and (3) multiple-targets present, i.e., searching for multiple object
categories where at least one is present in the image. Experiments are performed on three datasets
POET [26], PET [11] and MIT900 [8], which are the only available datasets for object search tasks.
4.1 Implementation details of RRSVM and SDR
Our implementation of RRSVM and SDR is similar to [29], but we consider more local regions.
This yields a finer localization map without changing the classification performance. As in [29],
the feature extraction pipeline is based on VGG16 [28]. The last fully connected layer of VGG16
is removed and the remaining fully connected layer is converted to a fully convolutional layer. To
compute feature vectors for multiple regions of an image, the image is resized and then fed into
VGG16 to yield a feature map with 4096 channels. The size of the feature map depends on the size
of the resized image, and each feature map corresponds to a subwindow of the original image. By
resizing the original image to multiple sizes, one can compute feature vectors for multiple regions of
the original image. In this work, we consider 7 different image sizes instead of the three sizes used
by [28, 29]. The first three resized images are obtained by scaling the image isotropically so that the
smallest dimension is 256, 384, or 512. For brevity, assuming the width is smaller than the height,
this yields three images with dimensions 256 × a, 384 × b, and 512 × c. We consider four other
resized images with dimensions 256 × b, 384 × c, 384 × a, 512 × b. These image sizes correspond to
local regions having an aspect ratio of either 2:3 or 3:2, while the isotropically resized images yield
square local regions. Additionally, we also consider horizontal flips of the resized images. Overall,
this process yields 700 to 1000 feature vectors, each corresponding to a local image region.
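The seven sizes can be enumerated directly; the sketch below (our own, assuming width ≤ height and integer rounding) mirrors the scheme described above:

def region_scales(width, height):
    # Isotropic sizes: smallest side scaled to 256, 384, 512 (square regions);
    # anisotropic sizes reuse the scaled long sides (2:3 or 3:2 regions).
    a = round(256 * height / width)
    b = round(384 * height / width)
    c = round(512 * height / width)
    return [(256, a), (384, b), (512, c),            # isotropic
            (256, b), (384, c), (384, a), (512, b)]  # anisotropic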
The RRSVM and SDR classifiers used in the following experiments are trained on the trainval set of
PASCAL VOC 2007 dataset [9] unless otherwise stated. This dataset is distinct from the datasets
used for evaluation. For SDR, the non-maxima suppression threshold is 0.5, and we only keep the
top ranked regions that have non-zero region scores (s_i ≥ 0.01). To generate a priority map, we first
associate each pixel with an integer indicating the total number of selected regions covering that
pixel, then apply a Gaussian blur kernel to the integer valued map, with the kernel width tuned on the
validation set.
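A minimal sketch of this priority-map construction, assuming the selected regions are given as pixel boxes and that a SciPy Gaussian filter stands in for the blur kernel:

import numpy as np
from scipy.ndimage import gaussian_filter

def priority_map(shape, regions, sigma):
    # Count, per pixel, how many selected regions cover it, then blur;
    # sigma plays the role of the validation-tuned kernel width.
    counts = np.zeros(shape, dtype=float)
    for (x1, y1, x2, y2) in regions:
        counts[y1:y2, x1:x2] += 1.0
    return gaussian_filter(counts, sigma)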
To test whether learned region sparsity and diversity predict human attention, we compare the
generated priority maps with the behaviorally-derived fixation density maps. To make this comparison
we use the Area Under the ROC Curve (AUC), a commonly used metric for visual search task
evaluation [6]. We use the publicly available implementation of the AUC evaluation from the MIT
saliency benchmark [5], specifically the AUC-Judd implementation for its better approximation.
4.2 Single-target present condition
We consider visual attention in the single-target present condition using the POET dataset [26].
This dataset is a subset of PASCAL VOC 2012 dataset [9], and it has 6270 images from 10 object
categories (aeroplane, boat, bike, motorbike, cat, dog, horse, cow, sofa and dining table). The task was
two-alternative forced choice for object categories, approximating visual search, and eye movement
data were collected from 5 subjects as they freely viewed these images. On average, 5.7 fixations
were made per image. The SDR classifier is trained on the trainval set of PASCAL VOC 2007 dataset,
which does not overlap with the POET dataset. We randomly selected one third of the images for
each category to compile a validation set for tuning the width of the Gaussian blur kernel for all
categories. The rest were used as test images.
For each test image, we compare the priority map generated for the selected regions by RRSVM with
the human fixation density map. The overall correlation is high, yielding a mean AUC score of 0.81
(on all images of 10 object classes). This is intriguing because RRSVM is optimized for classification
performance only; joint classification is apparently related to discriminative localization by human
attention in the context of a visual search task. By incorporating Inhibition of Return into RRSVM,
we observe even stronger correlation with human behavior, with the mean AUC score obtained by
SDR now being 0.85.
The left part of Table 1 shows AUC scores for individual categories of the POET dataset. We
compare against several other attention prediction baselines. All recent fixation prediction
models [8, 19, 31] apply object category detectors on the input image and combine the detection
results to create priority maps. Unfortunately, direct comparison to these models is not currently
possible due to the unavailability of needed code and datasets. However, our RCNN [12] baseline,
which is the state-of-the-art object detector on this dataset, should improve the pipelines of these
models. To account for possible localization errors and multiple object instances, we keep all the
detections with a detection score greater than a threshold. This threshold is chosen to maximize the
Table 1: AUC scores on POET and PET test sets

Model      | POET                                                              | PET
           | aero  bike  boat  cat   cow   table dog   horse mbike sofa  mean  | multi-target
SDR        | 0.87  0.85  0.83  0.89  0.88  0.79  0.88  0.86  0.86  0.77  0.85  | 0.83
RCNN       | 0.84  0.83  0.79  0.84  0.81  0.76  0.83  0.80  0.87  0.76  0.82  | 0.77
CAM [34]   | 0.86  0.78  0.78  0.88  0.84  0.74  0.87  0.84  0.83  0.67  0.82  | 0.65
AnnoBoxes  | 0.85  0.86  0.81  0.84  0.84  0.79  0.80  0.80  0.88  0.80  0.83  | 0.82
Figure 1: Priority maps generated by SDR on the POET dataset. Warm colors represent high
values. Dots represent human fixations. Best viewed on a digital device.
detector's F1 score, which is the harmonic mean between precision and recall. We also consider a
variant method where only the top detection is kept, but the result is not as good. We also consider
the recently proposed weakly-supervised object localization approach of [34], which is denoted as
CAM in Table 1. We use the released model to extract features and train a linear SVM on top of the
features. For each test image, we take a weighted linear sum of local activations to create an activation map.
We normalize the activation map to get the priority map. We even compare SDR with a method that
directly uses the annotated object bounding boxes to predict human attention, which is denoted as
AnnoBoxes in the table. For this method, the priority map is created by applying a Gaussian filter to
a binary map where the center of the bounding box over the target(s) is set to 1 and everywhere else
0. Notably, the methods selected for comparison are strong models for predicting human attention.
RCNN has an unfair advantage over SDR because it has access to localized annotations in its training
data, and AnnoBoxes even assumes the availability of object bounding boxes for test data. As can be
seen from Table 1, SDR significantly outperforms the other methods. This provides strong empirical
evidence that learned region sparsity and diversity is highly predictive of human attention.
Fig. 1 shows some randomly selected results from SDR on test images.
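As a sketch, the AnnoBoxes baseline described above can be written in a few lines; the helper name and (x0, y0, x1, y1) box format are our assumptions:

import numpy as np
from scipy.ndimage import gaussian_filter

def annoboxes_map(boxes, height, width, blur_sigma):
    """Gaussian-blurred binary map with 1s at ground-truth box centers."""
    binary = np.zeros((height, width), dtype=np.float32)
    for x0, y0, x1, y1 in boxes:
        binary[(y0 + y1) // 2, (x0 + x1) // 2] = 1.0  # center of the target box
    return gaussian_filter(binary, sigma=blur_sigma)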
Note that the incorporation of Inhibition of Return into RRSVM and the consideration of more local
regions does not affect the classification performance. When evaluated on the PASCAL VOC 2007
test set, the RRSVM method that uses local regions corresponding to 3 image scales (as in [29]), the
RRSVM method that uses more regions with different aspect ratios (as explained in Sec. 4.1), and
the RRSVM method that incorporates the NMS mechanism (i.e., SDR), all achieve a mean AP of
92.9%. SDR, however, is significantly better than RRSVM in predicting fixations during search tasks,
increasing the mean AUC score from 0.81 to 0.85. Also note that the predictive power of SDR is
not sensitive to the value of the NMS threshold: for aeroplane on the POET dataset, the AUC scores remain the same
(0.87) when this threshold is varied from 0.5 to 0.7.
Figure 2 shows some examples highlighting the difference between the regions selected by RRSVM
and SDR. As can be seen, incorporating non-maxima suppression encourages greater dispersion of
the sparse areas, as opposed to the more clustered distribution in RRSVM. This in turn better predicts
attention when there are multiple instances of the target object in the display.
[Figure 2 image panels (a) and (b), with per-example KLDiv values 0.11, 0.29, 0.61, 0.89 shown beneath; see caption below.]
Figure 2: Comparison between RRSVM and SDR on the POET dataset. (a): priority maps
created by RRSVM, (b): priority maps generated by SDR. SDR better captures fixations when there
are multiple instances of the target categories. The KL Divergence scores between RRSVM and SDR
are reported in the bottom row.
(a) motorbike
(b) aeroplane
(c) diningtable
(d) cow
Figure 3: Failure cases. Representative images where the priority maps produced by SDR are
significantly different from human fixations. The caption under each image indicates the target
category. The modes of failure are: (a) failure in classification; (b) and (c) existence of a more
attractive object (text or face); (d) co-occurrence of multiple objects. Best viewed on digital devices.
Figure 3 shows representative cases where the priority maps produced by SDR are significantly
different from human fixations. The common failure modes are: (1) failure to locate the correct
region for correct classification (see Fig 3a); (2) particularly distracting elements in the scene, such as
text (3b) or faces (3c); (3) failure to attend to multiple instances of the target categories. Tuning SDR
using human fixation behavioral data [17] and combining SDR with multiple sources of guidance
information [8], including saliency and scene context, could mitigate some of the model limitations.
4.3 Target absent condition
To test whether SDR is able to predict people?s fixations when the search target is absent, we
performed experiments on 456 target-absent images from the MIT900 dataset [8]. Human observers
were asked to search for people in real world scenes. Eye movement data were collected from 14
searchers who made roughly 6 fixations per image, on average. We picked a random subset of 150
images to tune the Gaussian blur parameter and reported the results for the remaining 306 images.
We noticed that the sizes and poses of the people in these images were very different from those of
the training samples in VOC2007, which could have led to poor SDR classification performance. In
order to address this issue, we augmented the training set of SDR with 456 images from MIT900 that
contain people. The added training examples were a disjoint set from the target-absent images for
evaluation.
On these target absent cases, SDR achieves an AUC score of 0.78. As a reference, the method of
Ehinger et al. [8] also achieves AUC of 0.78. But the two methods are not directly comparable
because Ehinger et al. [8] used a HOG-based person detector that was trained on a much larger
dataset with location annotation.
Figure 4: Priority map predictions using SDR on some MIT target-absent stimuli. Warm colors
represent high probabilities. Dots indicate human fixations. Best viewed on a digital device.
(a) dog and sheep
(b) cows and sheep
(c) dog and cat
(d) cows
Figure 5: Visualization of SDR prediction on the PET dataset. Note that the high classification
accuracy ensures that more reliable regions are detected.
Figure 4 shows some randomly selected results from the test set demonstrating SDR?s success in
predicting where people attend. Interestingly, SDR looks at regions that either contain person-like
objects or are likely to contain persons (e.g., sidewalks), with the latter observation likely the result of
sidewalks co-occurring with persons in the positive training samples (a form of scene context effect).
4.4 Multiple-target attention
We considered human visual search behavior when there were multiple targets. The experiments were
performed on the PET dataset [11]. This dataset is a subset of PASCAL VOC2012 dataset [9], and it
contains 4135 images from 6 animal categories (cat, dog, bird, horse, cow, and sheep). Four subjects
were instructed to find all of the animals in each image. Eye movements were recorded, where each
subject made roughly 6 fixations per image. We excluded the images that contained people to avoid
ambiguity with the animal category. We also removed the images that were shared with the PASCAL
VOC 2007 dataset to ensure no overlap between training and testing data. This yielded a total of
3309 images from which a random set of 1300 images were selected for tuning the Gaussian kernel
width parameter. The remaining 2309 images were used for testing.
To model the search for multiple categories in an image, for all methods except AnnoBoxes we
applied six animal classifiers/detectors simultaneously to the test image. For each classifier/detector
of each category, a threshold was selected to achieve the highest F1 score on the validation data. The
prediction results are shown in the right part of Tab. 1. SDR significantly outperforms other methods.
Notably, CAM performs poorly on this dataset, due perhaps to the low classification accuracy of that
model (83% mAP on VOC 2007 test set as opposed to 93% of SDR). Some randomly selected results
are shown in Fig. 5.
4.5 Center Bias
For the POET dataset, some of the target objects are quite iconic and in the center of the image.
For these cases, a simple center bias map might be a good predictor of the fixations. To test this,
we generated priority maps by setting the center of the image to 1 and everywhere else 0, and then
applying a Gaussian filter with sigma tuned on the validation set. This simple Center Bias (CB) map
achieved an AUC score of 0.84, which is even higher than some of the methods presented in Tab. 1.
This prompted us to analyze whether the good performance of SDR is simply due to center bias.
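This baseline is a one-liner up to smoothing; a sketch, with the helper name ours:

import numpy as np
from scipy.ndimage import gaussian_filter

def center_bias_map(height, width, sigma):
    """Impulse at the image center, smoothed with a validation-tuned Gaussian."""
    impulse = np.zeros((height, width), dtype=np.float32)
    impulse[height // 2, width // 2] = 1.0
    return gaussian_filter(impulse, sigma=sigma)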
An intuitive way to address the CB problem would be to use Shuffled AUC (sAUC) [33]. However,
sAUC favors true positives over false negatives and gives more credit to off-center information [3],
which may lead to biased results. This is especially true when the datasets are center-biased. The
sAUC scores for RCNN, AnnoBox, CAM, SDR, and Inter-Observer [3] are 0.61, 0.61, 0.65, 0.64,
and 0.70, respectively. SDR outperforms AnnoBox and RCNN by 3% and is on par with CAM. Also
Figure 6: (a): Red bars: the distribution of AUC scores of SDR for cases in which the AUC scores of Center
Bias are under 0.6. Blue bars: the distribution of AUC scores of Center Bias where the AUC scores of SDR
are under 0.6. (b): Box plot of the distributions of KL divergence between Center Bias and SDR
priority maps for each class in the POET dataset. The KL divergence distributions reveal that the priority maps
created by Center Bias are significantly different from the ones created by SDR.
note that sAUC for Inter-Observer is 0.70, which suggests the existence of center bias in POET (the
sAUC score of Inter-Observer on MIT300 [17] is 0.81) and raises a concern that sAUC might be
misleading for model comparison using this dataset.
To further address the concern of center bias, we show in Fig. 6 that the priority maps produced by
SDR and Center Bias are quite different. Fig. 6a plots the distribution of the AUC scores for one
method when the AUC scores of the other method were low (< 0.6). The spread of these distributions
indicates a low correlation between the errors of the two methods. Fig. 6b shows a box plot of the
distribution of KL divergence [6] between the priority maps generated by SDR and Center Bias. For
each category, the mean KL divergence value is high, indicating a large difference between SDR and
Center Bias. For a more qualitative intuition of KL divergence in these distributions, see Figure 2.
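For reference, the KL divergence between two priority maps can be computed by normalizing each map into a probability distribution (a standard formulation, following the metric definitions in [6]; eps guards against log(0)):

import numpy as np

def kl_divergence(p_map, q_map, eps=1e-8):
    """KL(P || Q) between two priority maps treated as distributions."""
    p = p_map.ravel() / (p_map.sum() + eps)
    q = q_map.ravel() / (q_map.sum() + eps)
    return float(np.sum(p * np.log((p + eps) / (q + eps))))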
The center bias effect in PET and MIT900 is not as pronounced as in POET because there are multiple
target objects in the PET images and the target objects in the MIT900 dataset are relatively small. For
these datasets, Center Bias achieves AUC scores of 0.78 and 0.72, respectively. These numbers are
significantly lower than the results obtained by SDR, which are 0.82 and 0.78, respectively.
5 Conclusions and Future Work
We introduced a classification model based on sparse and diverse region ranking and selection, which
is trained only on image level annotations. We then provided experimental evidence from visual
search tasks under three different conditions to support our hypothesis that these computational
mechanisms might be analogous to computations underlying visual attention processes in the brain.
While this work is not the first to use computer vision models to predict where humans look in visual
search tasks, it is the first to show that core mechanisms driving high model performance in a search
task also predict how humans allocate their attention in the same tasks. By improving upon these core
computational principles, and perhaps by incorporating new ones suggested by attention mechanisms,
our hope is to shed more light on human visual processing.
There are several directions for future work. The first is to create a visual search dataset that mitigates
the center bias effect and avoids cases of trivially easy search. The second is to incorporate into
the current model known factors affecting search, such as a center bias, bottom-up saliency, scene
context, etc., to better predict shifts in human spatial attention.
Acknowledgment. This project was partially supported by the National Science Foundation Awards
IIS-1161876 and IIS-1566248 and the Subsample project from the Digiteo Institute, France.
References
[1] J. Ba, V. Mnih, and K. Kavukcuoglu. Multiple object recognition with visual attention. In ICLR, 2015.
[2] A. Borji and L. Itti. State-of-the-art in visual attention modeling. PAMI, 35(1):185–207, 2013.
[3] A. Borji, H. R. Tavakoli, D. N. Sihite, and L. Itti. Analysis of scores, datasets, and models in visual saliency
prediction. In ICCV, 2013.
[4] N. D. Bruce and J. K. Tsotsos. Saliency, attention, and visual search: An information theoretic approach.
Journal of Vision, 9(3):5–5, 2009.
[5] Z. Bylinskii, T. Judd, A. Borji, L. Itti, F. Durand, A. Oliva, and A. Torralba. MIT saliency benchmark.
http://saliency.mit.edu/.
[6] Z. Bylinskii, T. Judd, A. Oliva, A. Torralba, and F. Durand. What do different evaluation metrics tell us
about saliency models? arXiv preprint arXiv:1604.03605, 2016.
[7] P. Dario, G. Sandini, and P. Aebischer. Robots and biological systems: Towards a new bionics? In NATO
Advanced Workshop, 2012.
[8] K. A. Ehinger, B. Hidalgo-Sotelo, A. Torralba, and A. Oliva. Modelling search for people in 900 scenes: A
combined source model of eye guidance. Visual Cognition, 17(6-7):945–978, 2009.
[9] M. Everingham, S. M. A. Eslami, L. V. Gool, C. K. I. Williams, J. Winn, and A. Zisserman. The pascal
visual object classes challenge: A retrospective. IJCV, 111(1):98–136, 2015.
[10] J. H. Fecteau and D. P. Munoz. Salience, relevance, and firing: a priority map for target selection. Trends
in cognitive sciences, 10(8):382–390, 2006.
[11] S. O. Gilani, R. Subramanian, Y. Yan, D. Melcher, N. Sebe, and S. Winkler. Pet: An eye-tracking dataset
for animal-centric pascal object classes. In ICME, 2015.
[12] R. Girshick, J. Donahue, T. Darrell, and J. Malik. Rich feature hierarchies for accurate object detection
and semantic segmentation. In CVPR, 2014.
[13] A. Graves. Generating sequences with recurrent neural networks. arXiv preprint arXiv:1308.0850, 2013.
[14] M. Hoai. Regularized max pooling for image categorization. In Proc. BMVC., 2014.
[15] M. Hoai and A. Zisserman. Improving human action recognition using score distribution and ranking. In
Proc. ACCV, 2014.
[16] L. Itti and C. Koch. A saliency-based search mechanism for overt and covert shifts of visual attention.
Vision Research, 40(10):1489–1506, 2000.
[17] T. Judd, K. Ehinger, F. Durand, and A. Torralba. Learning to predict where humans look. In Proc. ICCV.
IEEE, 2009.
[18] C. Kanan, M. H. Tong, L. Zhang, and G. W. Cottrell. Sun: Top-down saliency using natural statistics.
Visual Cognition, 17(6-7):979–1003, 2009.
[19] A. Kannan, J. Winn, and C. Rother. Clustering appearance and shape by learning jigsaws. In NIPS. 2007.
[20] C. Koch and S. Ullman. Shifts in selective visual attention: towards the underlying neural circuitry. In
Matters of intelligence, pages 115–141. Springer, 1987.
[21] I. Kokkinos, R. Deriche, T. Papadopoulo, O. Faugeras, and P. Maragos. Towards bridging the Gap between
Biological and Computational Image Segmentation. Research Report RR-6317, INRIA, 2007.
[22] A. Krizhevsky, I. Sutskever, and G. E. Hinton. Imagenet classification with deep convolutional neural
networks. In NIPS, 2012.
[23] S. Lazebnik, C. Schmid, and J. Ponce. Beyond bags of features: Spatial pyramid matching for recognizing
natural scene categories. In CVPR, 2006.
[24] T. S. Lee and X. Y. Stella. An information-theoretic framework for understanding saccadic eye movements.
In NIPS, 1999.
[25] V. Mnih, N. Heess, A. Graves, et al. Recurrent models of visual attention. In NIPS, 2014.
[26] D. P. Papadopoulos, A. D. Clarke, F. Keller, and V. Ferrari. Training object class detectors from eye
tracking data. In ECCV. 2014.
[27] O. Russakovsky, J. Deng, H. Su, J. Krause, S. Satheesh, S. Ma, Z. Huang, A. Karpathy, A. Khosla,
M. Bernstein, A. C. Berg, and L. Fei-Fei. Imagenet large scale visual recognition challenge. IJCV, 2015.
[28] K. Simonyan and A. Zisserman. Very deep convolutional networks for large-scale image recognition. In
ICLR, 2015.
[29] Z. Wei and M. Hoai. Region ranking svms for image classification. In CVPR, 2016.
[30] K. Xu, J. Ba, R. Kiros, K. Cho, A. Courville, R. Salakhudinov, R. Zemel, and Y. Bengio. Show, attend and
tell: Neural image caption generation with visual attention. In ICML, 2015.
[31] G. J. Zelinsky, H. Adeli, Y. Peng, and D. Samaras. Modelling eye movements in a categorical search task.
Philosophical Transactions of the Royal Society of London B: Biological Sciences, 368(1628):20130058,
2013.
[32] G. J. Zelinsky and J. W. Bisley. The what, where, and why of priority maps and their interactions with
visual working memory. Annals of the New York Academy of Sciences, 1339(1):154–164, 2015.
[33] L. Zhang, M. H. Tong, T. K. Marks, H. Shan, and G. W. Cottrell. Sun: A bayesian framework for saliency
using natural statistics. Journal of Vision, 8(7):32–32, 2008.
[34] B. Zhou, A. Khosla, A. Lapedriza, A. Oliva, and A. Torralba. Learning Deep Features for Discriminative
Localization. CVPR, 2016.
Batched Gaussian Process Bandit Optimization via
Determinantal Point Processes
Tarun Kathuria, Amit Deshpande, Pushmeet Kohli
Microsoft Research
t-takat@microsoft.com, amitdesh@microsoft.com, pkohli@microsoft.com
Abstract
Gaussian Process bandit optimization has emerged as a powerful tool for optimizing
noisy black box functions. One example in machine learning is hyper-parameter
optimization where each evaluation of the target function may require training
a model which may involve days or even weeks of computation. Most methods
for this so-called "Bayesian optimization" only allow sequential exploration of
the parameter space. However, it is often desirable to propose batches or sets
of parameter values to explore simultaneously, especially when there are large
parallel processing facilities at our disposal. Batch methods require modeling the
interaction between the different evaluations in the batch, which can be expensive
in complex scenarios. In this paper, we propose a new approach for parallelizing
Bayesian optimization by modeling the diversity of a batch via Determinantal
point processes (DPPs) whose kernels are learned automatically. This allows us
to generalize a previous result as well as prove better regret bounds based on
DPP sampling. Our experiments on a variety of synthetic and real-world robotics
and hyper-parameter optimization tasks indicate that our DPP-based methods,
especially those based on DPP sampling, outperform state-of-the-art methods.
1 Introduction
The optimization of an unknown function based on noisy observations is a fundamental problem
in various real world domains, e.g., engineering design [33], finance [36] and hyper-parameter
optimization [29]. In recent years, an increasingly popular direction has been to model smoothness
assumptions about the function via a Gaussian Process (GP), which provides an easy way to compute
the posterior distribution of the unknown function, and thereby uncertainty estimates that help to
decide where to evaluate the function next, in search of an optimum. This Bayesian optimization (BO)
framework has received considerable attention in tuning of hyper-parameters for complex models
and algorithms in Machine Learning, Robotics and Computer Vision [16, 31, 29, 12].
Apart from a few notable exceptions [9, 8, 11], most methods for Bayesian optimization work by
exploring one parameter value at a time. However, in many applications, it may be possible and,
moreover, desirable to run multiple function evaluations in parallel. A case in point is when the
underlying function corresponds to a laboratory experiment where multiple experimental setups are
available or when the underlying function is the result of a costly computer simulation and multiple
simulations can be run across different processors in parallel. By parallelizing the experiments,
substantially more information can be gathered in the same time-frame; however, future actions must
be chosen without the benefit of intermediate results. One might conceptualize these problems as
choosing "batches" of experiments to run simultaneously. The key challenge is to assemble batches
(out of a combinatorially large set of batches) of experiments that both explore the function and
exploit by focusing on regions with high estimated value.
Our Contributions Given that functions sampled from GPs usually have some degree of smoothness,
in the so-called batch Bayesian optimization (BBO) methods, it is desirable to choose batches which
are diverse. Indeed, this is the motivation behind many popular BBO methods like the BUCB [9],
UCB-PE [8] and Local Penalization [11]. Motivated by this long line of work in BBO, we propose
a new approach that employs Determinantal Point Processes (DPPs) to select diverse batches of
evaluations. DPPs are probability measures over subsets of a ground set that promote diversity, have
applications in statistical physics and random matrix theory [28, 21], and have efficient sampling
algorithms [17, 18]. The two main ways for fixed cardinality subset selection via DPPs are that of
choosing the subset which maximizes the determinant [DPP-MAX, Theorem 3.3] and sampling a
subset according to the determinantal probability measure [DPP-SAMPLE, Theorem 3.4]. Following
UCB-PE [8], our methods also choose the first point via an acquisition function, and then the rest
of the points are selected from a relevance region using a DPP. Since DPPs crucially depend on the
choice of the DPP kernel, it is important to choose the right kernel. Our method allows the kernel
to change across iterations and automatically compute it based on the observed data. This kernel
is intimately linked to the GP kernel used to model the function; it is in fact exactly the posterior
kernel function of the GP. The acquisition functions we consider are EST [34], a recently proposed
sequential MAP-estimate based Bayesian optimization algorithm with regret bounds independent of
the size of the domain, and UCB [30]. In fact, we show that UCB-PE can be cast into our framework
as just being DPP-MAX where the maximization is done via a greedy selection rule.
Given that DPP-MAX is too greedy, it may be desirable to allow for uncertainty in the observations.
Thus, we define DPP-SAMPLE which selects the batches via sampling subsets from DPPs, and show
that the expected regret is smaller than that of DPP-MAX. To provide a fair comparison with an
existing method, BUCB, we also derive regret bounds for B-EST [Theorem 3.2]. Finally, for all
methods with known regret bounds, the key quantity is the information gain. In the appendix, we also
provide a simpler proof of the information gain for the widely-used RBF kernel, which also improves
the bound from $O((\log T)^{d+1})$ [26, 30] to $O((\log T)^d)$. We conclude with experiments on synthetic
and real-world robotics and hyper-parameter optimization for extreme multi-label classification tasks
which demonstrate that our DPP-based methods, especially the sampling based ones are superior or
competitive with the existing baselines.
Related Work One of the key tasks involved in black box optimization is of choosing actions that
both explore the function and exploit our knowledge about likely high-reward regions in the function's
domain. This exploration-exploitation trade-off becomes especially important when the function
is expensive to evaluate. This exploration-exploitation trade off naturally leads to modeling this
problem in the multi-armed bandit paradigm [25], where the goal is to maximize cumulative reward
by optimally balancing this trade-off. Srinivas et al. [30] analyzed the Gaussian Process Upper
Confidence Bound (GP-UCB) algorithm, a simple and intuitive Bayesian method [3] to achieve the
first sub-linear regret bounds for Gaussian process bandit optimization. These bounds however grow
logarithmically in the size of the (finite) search space.
Recent work by Wang et al. [34] considered an intuitive MAP-estimate based strategy (EST) which
involves estimating the maximum value of a function and choosing a point which has maximum
probability of achieving this maximum value. They derive regret bounds for this strategy and show
that the bounds are actually independent of the size of the search space. The problem setting for both
UCB and EST is of optimizing a particular acquisition function. Other popular acquisition functions
include expected improvement (EI), probability of improvement over a certain threshold (PI). Along
with these, there is also work on Entropy search (ES) [13] and its variant, predictive entropy search
(PES) [14] which instead aims at minimizing the uncertainty about the location of the optimum of
the function. All the fore-mentioned methods, though, are inherently sequential in nature.
The BUCB and UCB-PE both depend on the crucial observation that the variance of the posterior
distribution does not depend on the actual values of the function at the selected points. They exploit
this fact by "hallucinating" the function values to be as predicted by the posterior mean. The BUCB
algorithm chooses the batch by sequentially selecting the points with the maximum UCB score
keeping the mean function the same and only updating the variance. The problem with this naive
approach is that it is too "overconfident" about the observations, which causes the confidence bounds on
the function values to shrink very quickly as we go deeper into the batch. This is fixed by a careful
initialization and expanding the confidence bounds which leads to regret bounds which are worse
than that of UCB by some multiplicative factor (independent of T and B). The UCB-PE algorithm
chooses the first point of the batch via the UCB score and then defines a "relevance region" and
selects the remaining points from this region greedily to maximize the information gain, in order to
focus on pure exploration (PE). This algorithm does not require any initialization like the BUCB and,
in fact, achieves better regret bounds than the BUCB.
2
Both BUCB and UCB-PE, however, are too greedy in their selection of batches which may be really
far from optimal due to our "immediate overconfidence" in the values. Indeed this is the criticism
of these two methods by a recently proposed BBO strategy PPES [27], which parallelizes predictive
entropy search based methods and shows considerable improvements over the BUCB and UCB-PE
methods. Another recently proposed method is the Local Penalization (LP) [11], which assumes that
the function is Lipschitz continuous and tries to estimate the Lipschitz constant. Since assumptions
of Lipschitz continuity naturally allow one to place bounds on how far the optimum of f is from a
certain location, they work to smoothly reduce the value of the acquisition function in a neighborhood
of any point reflecting the belief about the distance of this point to the maxima. However, assumptions
of Lipschitzness are too coarse-grained and it is unclear how their method to estimate the Lipschitz
constant and modelling of local penalization affects the performance from a theoretical standpoint.
Our algorithms, in contrast, are general and do not assume anything about the function other than it
being drawn from a Gaussian Process.
2 Preliminaries
Gaussian Process Bandit Optimization We address the problem of finding, in the lowest possible
number of iterations, the maximum $m$ of an unknown function $f : \mathcal{X} \to \mathbb{R}$, where $\mathcal{X} \subset \mathbb{R}^d$, i.e.,
$$m = f(x^\star) = \max_{x \in \mathcal{X}} f(x).$$
We consider the domain to be discrete, as it is well known how to obtain regret bounds for continuous,
compact domains via suitable discretizations [30]. At each iteration $t$, we choose a batch $\{x_{t,b}\}_{1 \le b \le B}$
of $B$ points and then simultaneously observe the noisy values taken by $f$ at these points, $y_{t,b} = f(x_{t,b}) + \epsilon_{t,b}$,
where $\epsilon_{t,b}$ is i.i.d. Gaussian noise $\mathcal{N}(0, \sigma^2)$. The function is assumed to be drawn
from a Gaussian process (GP), i.e., $f \sim GP(0, k)$, where $k : \mathcal{X}^2 \to \mathbb{R}^+$ is the kernel function. Given
the observations $\mathcal{D}_t = \{(x_\tau, y_\tau)\}_{\tau=1}^{t}$ up to time $t$, we obtain the posterior mean and covariance
functions [24] via the kernel matrix $K_t = [k(x_i, x_j)]_{x_i, x_j \in \mathcal{D}_t}$ and $k_t(x) = [k(x_i, x)]_{x_i \in \mathcal{D}_t}$:
$$\mu_t(x) = k_t(x)^T (K_t + \sigma^2 I)^{-1} y_t, \qquad k_t(x, x') = k(x, x') - k_t(x)^T (K_t + \sigma^2 I)^{-1} k_t(x').$$
The posterior variance is given by $\sigma_t^2(x) = k_t(x, x)$. Define the Upper Confidence Bound (UCB) $f^+$ and Lower
Confidence Bound (LCB) $f^-$ as
$$f_t^+(x) = \mu_{t-1}(x) + \beta_t^{1/2} \sigma_{t-1}(x), \qquad f_t^-(x) = \mu_{t-1}(x) - \beta_t^{1/2} \sigma_{t-1}(x).$$
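As a sketch, these posterior quantities can be computed directly from the formulas above (a numpy helper of our own; note that the variance line never touches the observations y, which is the key fact exploited below):

import numpy as np

def gp_posterior(K, k_x, k_xx, y, sigma2):
    """Posterior mean mu_t(x) and variance sigma_t^2(x).

    K: t x t kernel matrix over observed points; k_x: vector k_t(x);
    k_xx: prior variance k(x, x); y: t noisy observations."""
    A = np.linalg.solve(K + sigma2 * np.eye(len(y)), np.column_stack([y, k_x]))
    mu = k_x @ A[:, 0]           # depends on the observed values y
    var = k_xx - k_x @ A[:, 1]   # does NOT depend on y
    return mu, var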
A crucial observation made in BUCB [9] and UCB-PE [8] is that the posterior covariance and variance
functions do not depend on the actual function values at the set of points. The EST algorithm in [34]
chooses, at each timestep $t$, the point which has the maximum posterior probability of attaining the
maximum value $m$, i.e., $\arg\max_{x \in \mathcal{X}} \Pr(M_x \mid m, \mathcal{D}_t)$, where $M_x$ is the event that point $x$ achieves
the maximum value. This turns out to be equal to $\arg\min_{x \in \mathcal{X}} (m - \mu_t(x))/\sigma_t(x)$. Note that this
actually depends on the value of $m$, which, in most cases, is unknown. [34] get around this by using
an approximation $\hat{m}$ which, under certain conditions specified in their paper, is an upper bound on $m$.
They provide two ways to get the estimate $\hat{m}$, namely ESTa and ESTn. We refer the reader to [34]
for details of the two estimates and refer to ESTa as EST.
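Over a discrete domain, both acquisition rules reduce to simple array operations; a sketch under the notation above, where m_hat denotes the estimate $\hat m$ and the helper names are ours:

import numpy as np

def ucb_index(mu, sigma, beta):
    """UCB: maximize mu(x) + sqrt(beta) * sigma(x)."""
    return int(np.argmax(mu + np.sqrt(beta) * sigma))

def est_index(mu, sigma, m_hat):
    """EST: arg max Pr(M_x | m_hat, D_t) == arg min (m_hat - mu) / sigma."""
    return int(np.argmin((m_hat - mu) / np.maximum(sigma, 1e-12)))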
Assuming that the horizon $T$ is unknown, a strategy has to be good at any iteration. Let $r_{t,b}$ denote the
simple regret, the difference between the value of the maximum and the point queried $x_{t,b}$, i.e., $r_{t,b} = \max_{x \in \mathcal{X}} f(x) - f(x_{t,b})$.
While UCB-PE aims at minimizing a batched cumulative regret, in this
paper we will focus on the standard full cumulative regret defined as $R_{TB} = \sum_{t=1}^{T} \sum_{b=1}^{B} r_{t,b}$. This
models the case where all the queries in a batch should have low regret. The key quantity controlling
the regret bounds of all known BO algorithms is the maximum mutual information that can be gained
about $f$ from $T$ measurements: $\gamma_T = \max_{A \subset \mathcal{X}, |A| \le T} I(y_A; f_A) = \max_{A \subset \mathcal{X}, |A| \le T} \frac{1}{2} \log \det(I + \sigma^{-2} K_A)$,
where $K_A$ is the (square) submatrix of $K$ formed by picking the row and column indices
corresponding to the set $A$. The regret for both the UCB and the EST algorithms is given in the
following theorem, which is a combination of Theorem 1 in [30] and Theorem 3.1 in [34].
Theorem 2.1. Let $C = 2/\log(1 + \sigma^{-2})$ and fix $\delta > 0$. For UCB, choose $\beta_t = 2 \log(|\mathcal{X}| t^2 \pi^2 / 6\delta)$,
and for EST, choose $\nu_t = \big(\min_{x \in \mathcal{X}} \frac{\hat{m} - \mu_{t-1}(x)}{\sigma_{t-1}(x)}\big)^2$ and $\zeta_t = 2 \log(\pi^2 t^2 / \delta)$. With probability $1 - \delta$,
the cumulative regret up to any time step $T$ can be bounded as
$$R_T = \sum_{t=1}^{T} r_t \le \begin{cases} \sqrt{C T \beta_T \gamma_T} & \text{for UCB} \\ \sqrt{C T \gamma_T}\,\big(\nu_{t^\star}^{1/2} + \zeta_T^{1/2}\big) & \text{for EST,} \end{cases} \qquad \text{where } t^\star = \arg\max_t \nu_t.$$
Determinantal Point Processes Given a DPP kernel $K \in \mathbb{R}^{m \times m}$ over $m$ elements $\{1, \ldots, m\}$, the
$k$-DPP distribution defined on $2^{[m]}$ picks $B$, a $k$-subset of $[m]$, with probability proportional to $\det(K_B)$. Formally,
$$\Pr(B) = \frac{\det(K_B)}{\sum_{|S|=k} \det(K_S)}.$$
Algorithm 1 GP-BUCB / B-EST Algorithm
Input: Decision set $\mathcal{X}$, GP prior $\mu_0$, $\sigma_0$, kernel function $k(\cdot, \cdot)$, feedback mapping $fb[\cdot]$
for $t = 1$ to $TB$ do
    Choose $\beta_t^{1/2} = \begin{cases} C' \sqrt{2 \log(|\mathcal{X}| \pi^2 t^2 / 6\delta)} & \text{for BUCB} \\ C' \min_{x \in \mathcal{X}} (\hat{m} - \mu_{fb[t]}(x)) / \sigma_{t-1}(x) & \text{for B-EST} \end{cases}$
    Choose $x_t = \arg\max_{x \in \mathcal{X}} \big[\mu_{fb[t]}(x) + \beta_t^{1/2} \sigma_{t-1}(x)\big]$ and compute $\sigma_t(\cdot)$
    if $fb[t] < fb[t+1]$ then
        Obtain $y_{t'} = f(x_{t'}) + \epsilon_{t'}$ for $t' \in \{fb[t]+1, \ldots, fb[t+1]\}$ and compute $\mu_{fb[t+1]}(\cdot)$
    end if
end for
return $\arg\max_{t = 1, \ldots, TB} y_t$
The problems of picking a set of size $k$ which maximizes the determinant and of sampling a set according
to the $k$-DPP distribution have received considerable attention [22, 7, 6, 10, 1, 17]. The maximization
problem in general is NP-hard and, furthermore, has a hardness-of-approximation result of $1/c^k$ for
some $c > 1$. The best known approximation algorithm is by [22] with a factor of $1/e^k$, which almost
matches the lower bound. Their algorithm, however, is a complicated and expensive convex program.
A simple greedy algorithm, on the other hand, gives a $1/2^{k \log(k)}$-approximation. For sampling from
$k$-DPPs, an exact sampling algorithm exists due to [10]. This, however, does not scale to large
datasets. A recently proposed alternative is an MCMC-based method by [1], which is much faster.
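The greedy heuristic mentioned above is simple to state in code. A sketch, assuming a positive-definite kernel (as holds for the kernels $K_{t,1} = I + \sigma^{-2}[\cdot]$ used later); the helper name is ours:

import numpy as np

def kdpp_max_greedy(K, k):
    """Greedy heuristic for MAP inference in a k-DPP with PD kernel K:
    repeatedly add the element that most increases log det."""
    chosen, remaining = [], list(range(K.shape[0]))
    for _ in range(k):
        def gain(i):
            idx = chosen + [i]
            return np.linalg.slogdet(K[np.ix_(idx, idx)])[1]
        best = max(remaining, key=gain)
        chosen.append(best)
        remaining.remove(best)
    return chosen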
3 Main Results
In this section, we present our DPP-based algorithms. For a fair comparison of the various methods,
we first prove the regret bounds of the EST version of BUCB, i.e., B-EST. We then show the
equivalence between UCB-PE and UCB-DPP maximization along with showing regret bounds for the
EST version of PE/DPP-MAX. We then present the DPP sampling (DPP-SAMPLE) based methods
for UCB and EST and provide regret bounds. In Appendix 4, while borrowing ideas from [26], we
provide a simpler proof with improved bounds on the maximum information gain for the RBF kernel.
3.1 The Batched-EST algorithm
The BUCB uses a feedback mapping $fb$ which indicates, at any given time $t$ (in this case $t$ ranges over
a total of $TB$ timesteps), the iteration up to which the actual function values are available. In the
batched setting, this is just $\lfloor (t-1)/B \rfloor B$. The BUCB and B-EST (its EST variant) algorithms
are presented in Algorithm 1. The algorithm mainly follows from the observation made in [34] that
the point chosen by EST is the same as that of a variant of UCB. This is stated in the following lemma.
Lemma 3.1. (Lemma 2.1 in [34]) At any timestep $t$, the point selected by EST is the same as the
point selected by a variant of UCB with $\beta_t^{1/2} = \min_{x \in \mathcal{X}} (\hat{m} - \mu_{t-1}(x)) / \sigma_{t-1}(x)$.
This is sufficient to obtain B-EST as well, by simply running BUCB with the $\beta_t$ defined in
Lemma 3.1; this is also shown in Algorithm 1. In the algorithm, $C'$ is chosen to be $\exp(2C)$, where
$C$ is an upper bound on the maximum conditional mutual information $I(f(x); y_{fb[t]+1:t-1} \mid y_{1:fb[t]})$
(refer to [9] for details). The problem with naively using this algorithm is that the value of $C'$, and
correspondingly the regret bounds, usually grows at least linearly in $B$. This is corrected in [9] by a
two-stage BUCB which first chooses an initial batch of size $T^{\mathrm{init}}$ by greedily choosing points based
on the (updated) posterior variances. The values are then obtained and the posterior GP is calculated,
which is used as the prior GP in Algorithm 1. The $C'$ value can then be chosen independent of $B$.
We refer the reader to Table 1 in [9] for values of $C'$ and $T^{\mathrm{init}}$ for common kernels. Finally, the
regret bounds of B-EST are presented in the next theorem.
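The "hallucination" trick underlying Algorithm 1 boils down to a rank-one covariance update that requires no function values. A sketch over a discrete domain, with k_post the current posterior covariance matrix (our own helper):

import numpy as np

def shrink_variance(k_post, j, sigma2):
    """Posterior covariance after 'observing' index j; independent of y,
    so it can be applied within a batch before any feedback arrives."""
    kj = k_post[:, j]
    return k_post - np.outer(kj, kj) / (k_post[j, j] + sigma2)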
Theorem 3.2. Choose $\nu_t = \big(\min_{x \in \mathcal{X}} \frac{\hat{m} - \mu_{fb[t]}(x)}{\sigma_{t-1}(x)}\big)^2$ and $\beta_t = (C')^2 \nu_t$, with $B \ge 2$, $\delta > 0$, and the $C'$
and $T^{\mathrm{init}}$ values chosen according to Table 1 in [9]. At any timestep $T$, let $R_T$ be the cumulative
regret of the two-stage initialized B-EST algorithm. Then
$$\Pr\big\{R_T \le C' R_T^{\mathrm{seq}} + 2 \|f\|_\infty T^{\mathrm{init}}, \ \forall T \ge 1\big\} \ge 1 - \delta.$$
Proof. The proof is presented in Appendix 1.
Algorithm 2 GP-(UCB/EST)-DPP-(MAX/SAMPLE) Algorithm
Input: Decision set $\mathcal{X}$, GP prior $\mu_0$, $\sigma_0$, kernel function $k(\cdot, \cdot)$
for $t = 1$ to $T$ do
    Compute $\mu_{t-1}$ and $\sigma_{t-1}$ according to Bayesian inference.
    Choose $\beta_t^{1/2} = \begin{cases} \sqrt{2 \log(|\mathcal{X}| \pi^2 t^2 / 6\delta)} & \text{for UCB} \\ \min_{x \in \mathcal{X}} (\hat{m} - \mu_{t-1}(x)) / \sigma_{t-1}(x) & \text{for EST} \end{cases}$
    $x_{t,1} \leftarrow \arg\max_{x \in \mathcal{X}} \mu_{t-1}(x) + \sqrt{\beta_t}\, \sigma_{t-1}(x)$
    Compute $\mathcal{R}_t^+$ and construct the DPP kernel $K_{t,1}$
    $\{x_{t,b}\}_{b=2}^{B} \leftarrow \begin{cases} \mathrm{kDPPMaxGreedy}(K_{t,1}, B-1) & \text{for DPP-MAX} \\ \mathrm{kDPPSample}(K_{t,1}, B-1) & \text{for DPP-SAMPLE} \end{cases}$
    Obtain $y_{t,b} = f(x_{t,b}) + \epsilon_{t,b}$ for $b = 1, \ldots, B$
end for
3.2 Equivalence of Pure Exploration (PE) and DPP Maximization
We now present the equivalence between Pure Exploration and a procedure which involves DPP
maximization based on the greedy algorithm. For the next two sections, by an iteration we mean all
$B$ points selected in that iteration; thus, $\mu_{t-1}$ and $k_{t-1}$ are computed using the $(t-1)B$ observations
that are available to us. We first describe a generic framework for BBO inspired by UCB-PE: at
any iteration, the first point is chosen by maximizing UCB or EST, which can be seen as a variant of
UCB as per Lemma 3.1. A relevance region $\mathcal{R}_t^+$ is then defined which contains
$\arg\max_{x \in \mathcal{X}} f_{t+1}^+(x)$ with high probability. Let $y_t^- = f_t^-(x_t^-)$, where $x_t^- = \arg\max_{x \in \mathcal{X}} f_t^-(x)$.
The relevance region is formally defined as
$$\mathcal{R}_t^+ = \big\{x \in \mathcal{X} \,\big|\, \mu_{t-1}(x) + 2 \sqrt{\beta_{t+1}}\, \sigma_{t-1}(x) \ge y_t^- \big\}.$$
The intuition for considering this region is that using $\mathcal{R}_t^+$ guarantees that the queries at iteration $t$ will
leave an impact on the future choices at iteration $t+1$. The next $B-1$ points for the batch are
then chosen from $\mathcal{R}_t^+$ according to some rule. In the special case of UCB-PE, the $B-1$ points
are selected greedily from $\mathcal{R}_t^+$ by maximizing the (updated) posterior variance, while keeping the
mean function the same. Now, at the $t$-th iteration, consider the posterior kernel function after $x_{t,1}$
has been chosen (say $k_{t,1}$) and consider the kernel matrix $K_{t,1} = I + \sigma^{-2} [k_{t,1}(p_i, p_j)]_{i,j}$ over the
points $p_i \in \mathcal{R}_t^+$. We will consider this as our DPP kernel at iteration $t$. Two possible ways of
choosing $B-1$ points via this DPP kernel are to either choose the subset of size $B-1$ of maximum
determinant (DPP-MAX) or sample a set from a $(B-1)$-DPP using this kernel (DPP-SAMPLE). In
this subsection, we focus on the maximization problem.
of regrets over a batch at an iteration t is upper bounded as
B
X
b=1
rt,b ?
Y
B
B
B
X
X
(?t,b (xt,b ))2 ?
C2 ? 2 log(1 + ? ?2 ?t,b (xt,b )) = C2 ? 2 log
(1 + ? ?2 ?t,b (xt,b )
b=1
b=1
?2
b=1
?2
where C2 = ? / log(1 + ? ). From the final log-product term, it can be seen (from Schur?s
determinant identity [5] and the definition of ?t,b (xt,b )) that the product of the last B ? 1 terms is
exactly the B ? 1 principal minor of Kt,1 formed by the indices corresponding to S = {xt,b }B
b=2 .
Thus, it is straightforward to see that the UCB-PE algorithm is really just (B ? 1)-DPP maximization
via the greedy algorithm. This connection
will also be useful in the next subsection for DPP
SAMPLE. Thus,
PB
b=1
rt,b ? C2 ? 2 log(1 + ? ?2 ?t,1 (xt,1 )) + log det((Kt,1 )S ) . Finally, for EST-PE,
the proof proceeds like in the B-EST case by realising that EST is just UCB with an adaptive ?t . The
final algorithm (along with its sampling counterpart; details in the next subsection) is presented in
Algorithm 2. The procedure kDPPMaxGreedy(K, k) picks a principal submatrix of K of size k by
the greedy algorithm. Finally, we have the theorem for the regret bounds for (UCB/EST)-DPP-MAX.
Theorem 3.3. At iteration $t$, let $\beta_t = 2 \log(|\mathcal{X}| \pi^2 t^2 / 6\delta)$ for UCB, and let $\nu_t = \big(\min_{x} \frac{\hat{m} - \mu_{t-1}(x)}{\sigma_{t-1}(x)}\big)^2$ and
$\zeta_t = 2 \log(\pi^2 t^2 / 3\delta)$ for EST; let $C_1 = 36 / \log(1 + \sigma^{-2})$ and fix $\delta > 0$. Then, with probability $\ge 1 - \delta$,
the full cumulative regret $R_{TB}$ incurred by UCB-DPP-MAX satisfies $R_{TB} \le \sqrt{C_1 T B \beta_T \gamma_{TB}}$, and that
for EST-DPP-MAX satisfies $R_{TB} \le \sqrt{C_1 T B \gamma_{TB}}\,\big(\nu_{t^\star}^{1/2} + \zeta_T^{1/2}\big)$.
Proof. The proof is provided in Appendix 2. It should be noted that the term inside the logarithm in
$\zeta_t$ has been multiplied by 2 as compared to the sequential EST, which has a union bound over just
one point, $x_t$. This happens because we need a union bound over not just $x_{t,b}$ but also $x_t^-$.
Figure 1: Immediate regret of the algorithms on two synthetic functions with B = 5 and 10
3.3 Batch Bayesian Optimization via DPP Sampling
In the previous subsection, we looked at the regret bounds achieved by DPP maximization. One
natural question to ask is whether the other subset selection method via DPPs, namely DPP sampling,
gives us equivalent or better regret bounds. Note that in this case, the regret would have to be defined
as expected regret. This belief is well-founded: sampling from k-DPPs indeed gives better results,
in both theory and practice, for low-rank matrix approximation [10] and
exemplar selection for Nyström methods [19]. Keeping in line with the framework described in the
previous subsection, the subset to be selected has to be of size $B-1$ and the kernel should be $K_{t,1}$ at
any iteration $t$. Instead of maximizing, we can choose to sample from a $(B-1)$-DPP. The algorithm
is described in Algorithm 2. The $\mathrm{kDPPSample}(K, k)$ procedure denotes sampling a set from the
$k$-DPP distribution with kernel $K$. The question then is what the expected regret of this
procedure is. In this subsection, we show that the expected regret bounds of DPP-SAMPLE are less
than the regret bounds of DPP-MAX, and give a quantitative bound on this regret based on the entropy
of DPPs. By the entropy of a $k$-DPP with kernel $K$, $H(k\text{-}\mathrm{DPP}(K))$, we simply mean the standard
definition of entropy for a discrete distribution. Note that the entropy is always non-negative in this
case. Please see Appendix 3 for details. For brevity, since we always choose $B-1$ elements from
the DPP, we denote by $H(\mathrm{DPP}(K))$ the entropy of the $(B-1)$-DPP with kernel $K$.
Theorem 3.4. The regret bounds of DPP-SAMPLE are less than those of DPP-MAX. Furthermore, at
iteration $t$, let $\beta_t = 2 \log(|\mathcal{X}| \pi^2 t^2 / 6\delta)$ for UCB, and let $\nu_t = \big(\min_x \frac{\hat{m} - \mu_{t-1}(x)}{\sigma_{t-1}(x)}\big)^2$ and $\zeta_t = 2 \log(\pi^2 t^2 / 3\delta)$
for EST; let $C_1 = 36 / \log(1 + \sigma^{-2})$ and fix $\delta > 0$. Then the expected full cumulative regret of UCB-DPP-SAMPLE satisfies
$$R_{TB}^2 \le 2 T B C_1 \beta_T \Big(\gamma_{TB} - \sum_{t=1}^{T} H(\mathrm{DPP}(K_{t,1})) + B \log(|\mathcal{X}|)\Big)$$
and that for EST-DPP-SAMPLE satisfies
$$R_{TB}^2 \le 2 T B C_1 \big(\nu_{t^\star}^{1/2} + \zeta_T^{1/2}\big)^2 \Big(\gamma_{TB} - \sum_{t=1}^{T} H(\mathrm{DPP}(K_{t,1})) + B \log(|\mathcal{X}|)\Big).$$
Proof. The proof is provided in Appendix 3.
Note that the regret bounds for both DPP-MAX and DPP-SAMPLE are better than those of BUCB/B-EST,
the latter having both an additional factor of $B$ in the log term and a regret multiplier constant
$C'$. In fact, for the RBF kernel, $C'$ grows like $e^d$, which is quite large even for moderate values of $d$.
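A sketch of a swap-chain sampler in the spirit of [1]: propose exchanging one chosen element for one unchosen element and accept with the Metropolis rule on the determinant ratio. The actual chain and its mixing-time analysis include details (such as laziness and the step count needed for variation-distance error ε) that we omit here:

import numpy as np

def kdpp_mcmc_sample(K, k, n_steps, rng=np.random.default_rng()):
    """Approximate sample from a k-DPP with kernel K via swap moves."""
    m = K.shape[0]
    state = list(rng.choice(m, size=k, replace=False))
    cur = np.linalg.slogdet(K[np.ix_(state, state)])[1]
    for _ in range(n_steps):
        out_pos = int(rng.integers(k))       # position to swap out
        cand = int(rng.integers(m))          # candidate to swap in
        if cand in state:
            continue                         # counts as a rejected move
        prop = state.copy()
        prop[out_pos] = cand
        new = np.linalg.slogdet(K[np.ix_(prop, prop)])[1]
        if np.log(rng.random()) < new - cur:  # accept w.p. min(1, det ratio)
            state, cur = prop, new
    return state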
4 Experiments
In this section, we study the performance of the DPP-based algorithms, especially DPP-SAMPLE
against some existing baselines. In particular, the methods we consider are BUCB [9], B-EST,
UCB-PE/UCB-DPP-MAX [8], EST-PE/EST-DPP-MAX, UCB-DPP-SAMPLE, EST-DPP-SAMPLE
and UCB with local penalization (LP-UCB) [11]. We used the publicly available code for BUCB and
PE.¹ The code was modified to include the EST counterparts using the code for EST.² For
LP-UCB, we use the publicly available GPyOpt codebase³ and implemented the MCMC algorithm
of [1] for $k$-DPP sampling with $\epsilon = 0.01$ as the variation-distance error. We were unable to compare
against PPES, as its code was not publicly available. Furthermore, as shown in the experiments
in [27], PPES is very slow and does not scale beyond batch sizes of 4-5. Since UCB-PE almost
always performs better than the simulation matching algorithm of [4] in all experiments that we
could find in previous papers [27, 8], we forgo a comparison against simulation matching as well, to
avoid clutter in the graphs. The performance is measured after $t$ batch evaluations using immediate
regret, $r_t = |f(\tilde{x}_t) - f(x^\star)|$, where $x^\star$ is a known optimizer of $f$ and $\tilde{x}_t$ is the recommendation
of an algorithm after $t$ batch evaluations. We perform 50 experiments for each objective function
and report the median of the immediate regret obtained for each algorithm. To maintain consistency,
the first point of all methods is chosen to be the same (random). The mean function of the prior
GP was the zero function, while the kernel function was the squared-exponential kernel of the form
$k(x, y) = \eta^2 \exp[-0.5 \sum_d (x_d - y_d)^2 / l_d^2]$. The hyper-parameter $\eta$ was picked from a broad Gaussian
hyperprior and the other hyper-parameters were chosen from uninformative Gamma priors.
Our first set of experiments is on a set of synthetic benchmark objective functions, including Branin-Hoo [20], a mixture of cosines [2], and the Hartmann-6 function [20]. We choose batches of size 5
and 10. Due to lack of space, the results for mixture of cosines are provided in Appendix 5 while
the results of the other two are shown in Figure 1. The results suggest that the DPP-SAMPLE
based methods perform superior to the other methods. They do much better than their DPP-MAX
and Batched counterparts. The trends displayed with regards to LP are more interesting. For the
Branin-Hoo, LP-UCB starts out worse than the DPP based algorithms but takes over DPP-MAX
relatively quickly and approaches the performance of DPP-SAMPLE when the batch size is 5. When
the batch size is 10, the performance of LP-UCB does not improve much but both DPP-MAX and
DPP-SAMPLE perform better. For Hartmann, LP-UCB outperforms both DPP-MAX algorithms
by a considerable margin. The DPP-SAMPLE based methods perform better than LP-UCB. The
gap, however, is more for the batch size of 10. Again, the performance of LP-UCB changes much
lesser compared to the performance gain of the DPP-based algorithms. This is likely because the
batches chosen by the DPP-based methods are more ?globally diverse? for larger batch sizes. The
superior performance of the sampling based methods can be attributed to allowing for uncertainty in
the observations by sampling as opposed to greedily emphasizing on maximizing information gain.
We now consider maximization of real-world objective functions. The first function we consider,
robot, returns the walking speed of a bipedal robot [35]. The function's input parameters, which live
in $[0, 1]^8$, are the robot's controller. We add Gaussian noise with $\sigma = 0.1$ to the noiseless function.
The second function, Abalone,⁴ is a test function used in [8]. The challenge of the dataset is to predict
the age of a species of sea snails from physical measurements. Similar to [8], we will use it as a
maximization problem. Our final experiment is on hyper-parameter tuning for extreme multi-label
learning. In extreme classification, one needs to deal with multi-class and multi-label problems
involving a very large number of categories. Due to the prohibitively large number of categories,
running traditional machine learning algorithms is not feasible. A recent popular approach for extreme
classification is the FastXML algorithm [23]. The main advantage of FastXML is that it maintains
high accuracy while training in a fraction of the time compared to the previous state-of-the-art. The
FastXML algorithm has 5 parameters and the performance depends on these hyper-parameters, to a
reasonable amount. Our task is to perform hyper-parameter optimization on these 5 hyper-parameters
with the aim to maximize the Precision@k for k = 1, which is the metric used in [23] to evaluate
the performance of FastXML compared to other algorithms as well. While the authors of [23] run
extensive tests on a variety of datasets, we focus on two small datasets: Bibtex [15] and Delicious [32].
As before, we use batch sizes of 5 and 10. The results for Abalone and the FastXML experiment on
Delicious are provided in the appendix. The results for Prec@1 for FastXML on the Bibtex dataset
1 http://econtal.perso.math.cnrs.fr/software/
2 https://github.com/zi-w/EST
3 http://sheffieldml.github.io/GPyOpt/
4 The Abalone dataset is provided by the UCI Machine Learning Repository at http://archive.ics.uci.edu/ml/datasets/Abalone
Figure 2: Immediate regret of the algorithms for Prec@1 for FastXML on Bibtex and Robot with B = 5 and 10
and for the robot experiment are provided in Figure 2. The blue horizontal line for the FastXML
results indicates the maximum Prec@k value found using grid search.
The results for robot indicate that while DPP-MAX does better than their Batched counterparts, the
difference in the performance between DPP-MAX and DPP-SAMPLE is much less pronounced for
a small batch size of 5 but is considerable for batch sizes of 10. This is in line with our intuition
about sampling being more beneficial for larger batch sizes. The performance of LP-UCB is quite
close and slightly better than UCB-DPP-SAMPLE. This might be because the underlying function is
well-behaved (Lipschitz continuous) and thus, the estimate for the Lipschitz constant might be better
which helps them get better results. This improvement is more pronounced for batch size of 10 as
well. For Abalone (see Appendix 5), LP does better than DPP-MAX but there is a reasonable gap
between DPP-SAMPLE and LP which is more pronounced for B = 10.
The results for Prec@1 for the Bibtex dataset for FastXML are more interesting. Both DPP based
methods are much better than their Batched counterparts. For B = 5, DPP-SAMPLE is only slightly
better than DPP-MAX. LP-UCB starts out worse than DPP-MAX but starts doing comparable to
DPP-MAX after a few iterations. For B = 10, there is not a large improvement in the gap between
DPP-MAX and DPP-SAMPLE. LP-UCB however, quickly takes over UCB-DPP-MAX and comes
quite close to the performance of DPP-SAMPLE after a few iterations. For the Delicious dataset (see
Appendix 5), we see a similar trend of the improvement of sampling to be larger for larger batch sizes.
LP-UCB displays an interesting trend in this experiment by doing much better than UCB-DPP-MAX
for B = 5 and is in fact quite close to the performance of DPP-SAMPLE. However, for B = 10, its
performance is much closer to UCB-DPP-MAX. DPP-SAMPLE loses out to LP-UCB only on the
robot dataset and does better for all the other datasets. Furthermore, this improvement seems more
pronounced for larger batch sizes. We leave experiments with other kernels and a more thorough
experimental evaluation with respect to batch sizes for future work.
5 Conclusion
We have proposed a new method for batched Gaussian Process bandit (batch Bayesian) optimization
based on DPPs which are desirable in this case as they promote diversity in batches. The DPP kernel
is automatically figured out on the fly which allows us to show regret bounds for DPP maximization
and sampling based methods for this problem. We show that this framework exactly recovers a
popular algorithm for BBO, namely the UCB-PE when we consider DPP maximization using the
greedy algorithm. We showed that the regret for the sampling based method is always less than the
maximization based method. We also derived their EST counterparts, and provided a simpler
proof of the information gain for RBF kernels which leads to a slight improvement on the best
known bound. Our experiments on a variety of synthetic and real-world tasks validate our theoretical claims
that sampling performs better than maximization and other methods.
References
[1] N. Anari, S. O. Gharan, and A. Rezaei. Monte Carlo Markov chain algorithms for sampling strongly Rayleigh distributions and determinantal point processes. COLT, 2016.
[2] B. S. Anderson, A. W. Moore, and D. Cohn. A nonparametric approach to noisy and costly optimization. ICML, 2000.
[3] P. Auer. Using confidence bounds for exploration-exploitation trade-offs. JMLR, 3:397–422, 2002.
[4] J. Azimi, A. Fern, and X. Fern. Batch Bayesian optimization via simulation matching. NIPS, 2010.
[5] R. Brualdi and H. Schneider. Determinantal identities: Gauss, Schur, Cauchy, Sylvester, Kronecker, Jacobi, Binet, Laplace, Muir, and Cayley. Linear Algebra and its Applications, 1983.
[6] A. Çivril and M. Magdon-Ismail. On selecting a maximum volume sub-matrix of a matrix and related problems. Theor. Comput. Sci., 410(47-49):4801–4811, 2009.
[7] A. Çivril and M. Magdon-Ismail. Exponential inapproximability of selecting a maximum volume sub-matrix. Algorithmica, 65(1):159–176, 2013.
[8] E. Contal, D. Buffoni, A. Robicquet, and N. Vayatis. Parallel Gaussian process optimization with upper confidence bound and pure exploration. ECML, 2013.
[9] T. Desautels, A. Krause, and J. W. Burdick. Parallelizing exploration-exploitation tradeoffs in Gaussian process bandit optimization. JMLR, 15:4053–4103, 2014.
[10] A. Deshpande and L. Rademacher. Efficient volume sampling for row/column subset selection. FOCS, 2010.
[11] J. González, Z. Dai, P. Hennig, and N. Lawrence. Batch Bayesian optimization via local penalization. AISTATS, 2016.
[12] J. González, M. A. Osborne, and N. D. Lawrence. GLASSES: Relieving the myopia of Bayesian optimisation. AISTATS, 2016.
[13] P. Hennig and C. Schuler. Entropy search for information-efficient global optimization. JMLR, 13, 2012.
[14] J. M. Hernández-Lobato, M. W. Hoffman, and Z. Ghahramani. Predictive entropy search for efficient global optimization of black-box functions. NIPS, 2014.
[15] I. Katakis, G. Tsoumakas, and I. Vlahavas. Multilabel text classification for automated tag suggestion. ECML/PKDD Discovery Challenge, 2008.
[16] A. Krause and C. S. Ong. Contextual Gaussian process bandit optimization. NIPS, 2011.
[17] A. Kulesza and B. Taskar. k-DPPs: Fixed-size determinantal point processes. ICML, 2011.
[18] A. Kulesza and B. Taskar. Determinantal point processes for machine learning. Foundations and Trends in Machine Learning, 5(2-3):123–286, 2012.
[19] C. Li, S. Jegelka, and S. Sra. Fast DPP sampling for Nyström with application to kernel methods. ICML, 2016.
[20] D. Lizotte. Practical Bayesian Optimization. PhD thesis, University of Alberta, 2008.
[21] R. Lyons. Determinantal probability measures. Publications Mathématiques de l'Institut des Hautes Études Scientifiques, 98(1):167–212, 2003.
[22] A. Nikolov. Randomized rounding for the largest simplex problem. STOC, pages 861–870, 2015.
[23] Y. Prabhu and M. Varma. FastXML: A fast, accurate and stable tree-classifier for extreme multi-label learning. KDD, 2014.
[24] C. Rasmussen and C. Williams. Gaussian Processes for Machine Learning. MIT Press, 2006.
[25] H. Robbins. Some aspects of the sequential design of experiments. Bull. Amer. Math. Soc., 1952.
[26] M. W. Seeger, S. M. Kakade, and D. P. Foster. Information consistency of nonparametric Gaussian process methods. IEEE Trans. Inf. Theory, 54(5):2376–2382, 2008.
[27] A. Shah and Z. Ghahramani. Parallel predictive entropy search for batch global optimization of expensive objective functions. NIPS, 2015.
[28] T. Shirai and Y. Takahashi. Random point fields associated with certain Fredholm determinants I: fermion, Poisson and boson point processes. Journal of Functional Analysis, 205(2):414–463, 2003.
[29] J. Snoek, H. Larochelle, and R. P. Adams. Practical Bayesian optimization of machine learning algorithms. NIPS, 2012.
[30] N. Srinivas, A. Krause, S. Kakade, and M. Seeger. Information-theoretic regret bounds for Gaussian process optimization in the bandit setting. IEEE Transactions on Information Theory, 58(5):3250–3265, 2012.
[31] C. Thornton, F. Hutter, H. H. Hoos, and K. Leyton-Brown. Auto-WEKA: Combined selection and hyperparameter optimization of classification algorithms. KDD, 2013.
[32] G. Tsoumakas, I. Katakis, and I. Vlahavas. Effective and efficient multilabel classification in domains with large number of labels. ECML/PKDD 2008 Workshop on Mining Multidimensional Data, 2008.
[33] G. Wang and S. Shan. Review of metamodeling techniques in support of engineering design optimization. Journal of Mechanical Design, 129:370–380, 2007.
[34] Z. Wang, B. Zhou, and S. Jegelka. Optimization as estimation with Gaussian processes in bandit settings. AISTATS, 2016.
[35] E. Westervelt and J. Grizzle. Feedback Control of Dynamic Bipedal Robot Locomotion. Control and Automation Series, 2007.
[36] W. Ziemba and R. Vickson. Stochastic Optimization Models in Finance. World Scientific Singapore, 2006.
6,028 | 6,453 | Using Social Dynamics to Make Individual Predictions:
Variational Inference with a Stochastic Kinetic Model
Zhen Xu, Wen Dong, and Sargur Srihari
Department of Computer Science and Engineering
University at Buffalo
{zxu8,wendong,srihari}@buffalo.edu
Abstract
Social dynamics is concerned primarily with interactions among individuals and the
resulting group behaviors, modeling the temporal evolution of social systems via
the interactions of individuals within these systems. In particular, the availability of
large-scale data from social networks and sensor networks offers an unprecedented
opportunity to predict state-changing events at the individual level. Examples
of such events include disease transmission, opinion transition in elections, and
rumor propagation. Unlike previous research focusing on the collective effects
of social systems, this study makes efficient inferences at the individual level. In
order to cope with dynamic interactions among a large number of individuals, we
introduce the stochastic kinetic model to capture adaptive transition probabilities
and propose an efficient variational inference algorithm, the complexity of which
grows linearly, rather than exponentially, with the number of individuals.
To validate this method, we have performed epidemic-dynamics experiments on
wireless sensor network data collected from more than ten thousand people over
three years. The proposed algorithm was used to track disease transmission and
predict the probability of infection for each individual. Our results demonstrate
that this method is more efficient than sampling while nonetheless achieving high
accuracy.
1
Introduction
The field of social dynamics is concerned primarily with interactions among individuals and the
resulting group behaviors. Research in social dynamics models the temporal evolution of social
systems via the interactions of the individuals within these systems [9]. For example, opinion
dynamics can model the opinion state transitions of an entire population in an election scenario [3],
and epidemic dynamics can predict disease outbreaks ahead of time [10]. While traditional social-dynamics models focus primarily on the macroscopic effects of social systems, often we instead
wish to know the answers to more specific questions. Given the movement and behavior history
of a subject with Ebola, can we tell how many people should be tested or quarantined? City-size
quarantine is not necessary, but family-size quarantine is insufficient. We aim to model a method to
evaluate the paths of illness transmission and the risks of infection for individuals, so that limited
medical resources can be most efficiently distributed.
The rapid growth of both social networks and sensor networks offers an unprecedented opportunity
to collect abundant data at the individual level. From these data we can extract temporal interactions
among individuals, such as meeting or taking the same class. To take advantage of this opportunity, we model social dynamics from an individual perspective. Although such an approach has
considerable potential, in practice it is difficult to model the dynamic interactions and handle the
costly computations when a large number of individuals are involved. In this paper, we introduce an
30th Conference on Neural Information Processing Systems (NIPS 2016), Barcelona, Spain.
event-based model into social systems to characterize their temporal evolutions and make tractable
inferences on the individual level.
Our research on the temporal evolutions of social systems is related to dynamic Bayesian networks
and continuous time Bayesian networks [13, 18, 21]. Traditionally, a coupled hidden Markov model
is used to capture the interactions of components in a system [2], but this model does not consider
dynamic interactions. However, a stochastic kinetic model is capable of successfully describing the
interactions of molecules (such as collisions) in chemical reactions [12, 22], and is widely used in
many fields such as chemistry and cell biology [1, 11]. We introduce this model into social dynamics
and use it to focus on individual behaviors.
A challenge in capturing the interactions of individuals is that in social dynamics the state space grows
exponentially with the number of individuals, which makes exact inference intractable. To resolve
this we must apply approximate inference methods. One class of these involves sampling-based
methods. Rao and Teh introduce a Gibbs sampler based on local updates [20], while Murphy and
Russell introduce Rao-Blackwellized particle filtering for dynamic Bayesian networks [17]. However,
sampling-based methods sometimes mix slowly and require a large number of samples/particles. To
demonstrate this issue, we offer empirical comparisons with two major sampling methods in Section
4. An alternative class of approximations is based on variational inference. Opper and Sanguinetti
apply the variational mean field approach to factor a Markov jump process [19], and Cohn and El-Hay
further improve its efficiency by exploiting the structure of the target network [4]. A problem is that
in an event-based model such as a stochastic kinetic model (SKM), the variational mean field is not
applicable when a single event changes the states of two individuals simultaneously. Here, we use a
general expectation propagation principle [14] to design our algorithm.
This paper makes three contributions: First, we introduce the discrete event model into social
dynamics and make tractable inferences on both individual behaviors and collective effects. To this
end, we apply the stochastic kinetic model to define adaptive transition probabilities that characterize
the dynamic interaction patterns in social systems. Second, we design an efficient variational inference
algorithm whose computation complexity grows linearly with the number of individuals. As a result,
it scales very well in large social systems. Third, we conduct experiments on epidemic dynamics to
demonstrate that our algorithm can track the transmission of epidemics and predict the probability of
infection for each individual. Further, we demonstrate that the proposed method is more efficient
than sampling while nonetheless achieving high accuracy.
The remainder of this paper is organized as follows. In Section 2, we briefly review the coupled hidden
Markov model and the stochastic kinetic model. In Section 3, we propose applying a variational
algorithm with the stochastic kinetic model to make tractable inferences in social dynamics. In
Section 4, we detail empirical results from applying the proposed algorithm to our epidemic data
along with the proximity data collected from sensor networks. Section 5 concludes.
2 Background
2.1 Coupled Hidden Markov Model
A coupled hidden Markov model (CHMM) captures the dynamics of a discrete time Markov process
that joins a number of distinct hidden Markov models (HMMs), as shown in Figure 1(a). $x_t = (x_t^{(1)}, \dots, x_t^{(M)})$ defines the hidden states of all HMMs at time $t$, and $x_t^{(m)}$ is the hidden state of HMM $m$ at time $t$. $y_t = (y_t^{(1)}, \dots, y_t^{(M)})$ are observations of all HMMs at time $t$, and $y_t^{(m)}$ is the observation of HMM $m$ at time $t$. $P(x_t \mid x_{t-1})$ are transition probabilities, and $P(y_t \mid x_t)$ are emission probabilities for CHMM. Given hidden states, all observations are independent. As such, $P(y_t \mid x_t) = \prod_m P(y_t^{(m)} \mid x_t^{(m)})$, where $P(y_t^{(m)} \mid x_t^{(m)})$ is the emission probability for HMM $m$ at time $t$. The joint probability of CHMM can be defined as follows:
$$P(x_{1,\dots,T}, y_{1,\dots,T}) = \prod_{t=1}^{T} P(x_t \mid x_{t-1})\, P(y_t \mid x_t). \quad (1)$$
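As a concrete illustration of Eq. (1), the following sketch evaluates the CHMM log joint probability over the joint state space directly; the array layout and names are our own assumptions, and the $2^M$ joint state space is exactly what makes this direct approach intractable for large $M$, as discussed next.

```python
import numpy as np

def chmm_log_joint(states, obs, log_trans, log_emit, log_init):
    """log P(x_{1:T}, y_{1:T}) = log P(x_1) + sum_t [log P(x_t|x_{t-1}) + log P(y_t|x_t)].

    states, obs : length-T sequences of joint-state / joint-observation indices.
    log_trans   : (S, S) log transition matrix over the joint state space (S = 2^M here).
    log_emit    : (S, O) log emission matrix;  log_init : (S,) log initial distribution.
    """
    lp = log_init[states[0]] + log_emit[states[0], obs[0]]
    for t in range(1, len(states)):
        lp += log_trans[states[t - 1], states[t]] + log_emit[states[t], obs[t]]
    return lp

rng = np.random.default_rng(0)
S, O, T = 4, 2, 5                      # M = 2 binary chains -> S = 2^M = 4 joint states
A = rng.dirichlet(np.ones(S), size=S)  # joint transition kernel
B = rng.dirichlet(np.ones(O), size=S)  # joint emission kernel
pi = rng.dirichlet(np.ones(S))
x = rng.integers(0, S, T); y = rng.integers(0, O, T)
print(chmm_log_joint(x, y, np.log(A), np.log(B), np.log(pi)))
```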
For a CHMM that contains $M$ HMMs in a binary state, the state space is $2^M$, and the state transition kernel is a $2^M \times 2^M$ matrix. In order to make exact inferences, the classic forward-backward algorithm sweeps a forward/filtering pass to compute the forward statistics $\alpha_t(x_t) = P(x_t \mid y_{1,\dots,t})$
Figure 1: Illustration of (a) Coupled Hidden Markov Model, (b) Stochastic Kinetic Model.
and a backward/smoothing pass to estimate the backward statistics $\beta_t(x_t) = P(y_{t+1,\dots,T} \mid x_t)\,/\,P(y_{t+1,\dots,T} \mid y_{1,\dots,t})$. Then it can estimate the one-slice statistics $\gamma_t(x_t) = P(x_t \mid y_{1,\dots,T}) = \alpha_t(x_t)\,\beta_t(x_t)$ and two-slice statistics
$$\xi_t(x_{t-1}, x_t) = P(x_{t-1}, x_t \mid y_{1,\dots,T}) = \frac{\alpha_{t-1}(x_{t-1})\, P(x_t \mid x_{t-1})\, P(y_t \mid x_t)\, \beta_t(x_t)}{P(y_t \mid y_{1,\dots,t-1})}.$$
Its complexity
grows exponentially with the number of HMM chains. In order to make tractable inferences, certain
factorizations and approximations must be applied. In the next section, we introduce a stochastic
kinetic model to lower the dimensionality of transition probabilities.
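For reference, here is a minimal sketch of the classic single-chain forward-backward smoother that produces the normalized statistics above; the scaling scheme and names are illustrative, not taken from any particular codebase.

```python
import numpy as np

def forward_backward(A, B, pi, y):
    """Normalized forward-backward smoother for one HMM.

    Returns alpha_t(x) = P(x_t | y_{1:t}),
    beta_t(x) = P(y_{t+1:T} | x_t) / P(y_{t+1:T} | y_{1:t}),
    and gamma_t(x) = alpha_t(x) * beta_t(x) = P(x_t | y_{1:T}).
    """
    T, S = len(y), len(pi)
    alpha = np.zeros((T, S)); c = np.zeros(T)
    a = pi * B[:, y[0]]; c[0] = a.sum(); alpha[0] = a / c[0]
    for t in range(1, T):
        a = (alpha[t - 1] @ A) * B[:, y[t]]
        c[t] = a.sum(); alpha[t] = a / c[t]
    beta = np.ones((T, S))
    for t in range(T - 2, -1, -1):
        beta[t] = (A @ (B[:, y[t + 1]] * beta[t + 1])) / c[t + 1]
    return alpha, beta, alpha * beta

rng = np.random.default_rng(1)
A = rng.dirichlet(np.ones(2), size=2); B = rng.dirichlet(np.ones(3), size=2)
pi = np.array([0.5, 0.5]); y = rng.integers(0, 3, size=6)
alpha, beta, gamma = forward_backward(A, B, pi, y)
print(gamma.sum(axis=1))  # each row sums to 1
```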
2.2 The Stochastic Kinetic Model
A stochastic kinetic model describes the temporal evolution of a chemical system with M species
$X = \{X_1, X_2, \dots, X_M\}$ driven by $V$ events (or chemical reactions) parameterized by rate constants
$c = (c_1, \dots, c_V)$. An event (chemical reaction) $k$ has a general form as follows:
$$r_1 X_1 + \cdots + r_M X_M \xrightarrow{c_k} p_1 X_1 + \cdots + p_M X_M.$$
The species on the left are called reactants, and rm is the number of mth reactant molecules consumed
during the reaction. The species on the right are called products, and pm is the number of mth product
molecules produced in the reaction. Species involved in the reaction (rm > 0) without consumption
or production (rm = pm ) are called catalysts. At any specific time t, the populations of the species
is $x_t = (x_t^{(1)}, \dots, x_t^{(M)})$. An event $k$ happens with rate $h_k(x_t, c_k)$, determined by the rate constant
and the current population state [22]:
$$h_k(x_t, c_k) = c_k\, g_k(x_t) = c_k \prod_{m=1}^{M} g_k^{(m)}(x_t^{(m)}). \quad (2)$$
The form of $g_k(x_t)$ depends on the reaction. In our case, we adopt the product form $g_k(x_t) = \prod_{m=1}^{M} g_k^{(m)}(x_t^{(m)})$, which represents the total number of ways that reactant molecules can be selected to trigger event $k$ [22]. Event $k$ changes the populations by $\Delta_k = x_t - x_{t-1}$. The probability that event $k$ will occur during time interval $(t, t + dt]$ is $h_k(x_t, c_k)\,dt$. We assume at each discrete time step that no more than one event will occur. This assumption follows the linearization principle in the literature [18], and is valid when the discrete time step is small. We treat each discrete time step as a unit of time, so that $h_k(x_t, c_k)$ represents the probability of an event.
In epidemic modeling, for example, an infection event $v_i$ has the form $S + I \xrightarrow{c_i} 2I$, such that a susceptible individual ($S$) is infected by an infectious individual ($I$) with rate constant $c_i$. If there is only one susceptible individual (type $m = 1$) and one infectious individual (type $m = 2$) involved in this event, $h_i(x_t, c_i) = c_i$, $\Delta_i = [-1\ \ 1]^T$ and $P(x_t - x_{t-1} = \Delta_i) = P(x_t \mid x_{t-1}, v_i) = c_i$.
In a traditional hidden Markov model, the transition kernel is typically fixed. In comparison, SKM
is better at capturing dynamic interactions in terms of the events with rates dependent on reactant
populations, as shown in Eq.(2).
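The following sketch simulates a discrete-time SKM under the linearization assumption above (at most one event per unit time step, which requires the event probabilities to sum to at most 1). The SIS-style events and the specific rate constants below are our own toy choices.

```python
import numpy as np

def simulate_skm(x0, events, T, rng):
    """events: list of (c_k, g_k, delta_k) with rate constant c_k, propensity factor
    g_k(x) (number of ways reactants can be selected), and population change delta_k.
    At most one event fires per unit time step, with probability h_k(x, c_k) = c_k * g_k(x);
    assumes the time step is small enough that sum_k h_k <= 1."""
    x = np.array(x0, dtype=float); traj = [x.copy()]
    for _ in range(T):
        h = np.array([c * g(x) for c, g, _ in events])
        p = np.append(h, 1.0 - h.sum())          # last entry: the "no event" outcome
        k = rng.choice(len(p), p=p)
        if k < len(events):
            x = x + events[k][2]
        traj.append(x.copy())
    return np.array(traj)

# SIS example with species (S, I): infection S + I -> 2I, recovery I -> S.
rng = np.random.default_rng(2)
events = [(0.0005, lambda x: x[0] * x[1], np.array([-1.0, 1.0])),  # c2 * S * I
          (0.01,   lambda x: x[1],        np.array([1.0, -1.0]))]  # c1 * I
print(simulate_skm([50, 1], events, T=100, rng=rng)[-1])
```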
3 Variational Inference with the Stochastic Kinetic Model
In this section, we define the likelihood of the entire sequence of hidden states and observations for
an event-based model, and derive a variational inference algorithm and parameter-learning algorithm.
3.1 Likelihood for Event-based Model
In social dynamics, we use a discrete time Markov model to describe the temporal evolutions of a set
of individuals x(1) , . . . , x(M ) according to a set of V events. To cope with dynamic interactions, we
introduce the SKM and express the state transition probabilities in terms of event probabilities, as
shown in Figure 1(b). We assume at each discrete time step that no more than one event will occur.
Let v1 , . . . , vT be a sequence of events, x1 , . . . , xT a sequence of hidden states, and y1 , . . . , yT a
set of observations. Similar to Eq.(1), the likelihood of the entire sequence is as follows:
$$P(x_{1,\dots,T}, y_{1,\dots,T}, v_{1,\dots,T}) = \prod_{t=1}^{T} P(x_t, v_t \mid x_{t-1})\, P(y_t \mid x_t), \quad \text{where}$$
$$P(x_t, v_t \mid x_{t-1}) = \begin{cases} c_k \cdot g_k(x_{t-1}) \cdot \delta(x_t - x_{t-1} \doteq \Delta_k) & \text{if } v_t = k, \\ \big(1 - \sum_k c_k\, g_k(x_{t-1})\big) \cdot \delta(x_t - x_{t-1} \doteq 0) & \text{if } v_t = \emptyset. \end{cases} \quad (3)$$
$P(x_t, v_t \mid x_{t-1})$ is the event-based transition kernel. $\delta(x_t - x_{t-1} \doteq \Delta_k)$ is 1 if the previous state
is $x_{t-1}$ and the current state is $x_t = x_{t-1} + \Delta_k$, and 0 otherwise. $\Delta_k$ is the effect of event $v_k$. $\emptyset$
represents an auxiliary event, meaning that there is no event. Substituting the product form of $g_k$, the
transition kernel can be written as follows:
$$P(x_t, v_t = k \mid x_{t-1}) = c_k \prod_m g_k^{(m)}(x_{t-1}^{(m)}) \cdot \prod_m \delta(x_t^{(m)} - x_{t-1}^{(m)} \doteq \Delta_k^{(m)}), \quad (4)$$
$$P(x_t, v_t = \emptyset \mid x_{t-1}) = \Big(1 - \sum_k c_k \prod_m g_k^{(m)}(x_{t-1}^{(m)})\Big) \cdot \prod_m \delta(x_t^{(m)} - x_{t-1}^{(m)} \doteq 0), \quad (5)$$
where $\delta(x_t^{(m)} - x_{t-1}^{(m)} \doteq \Delta_k^{(m)})$ is 1 if the previous state of an individual $m$ is $x_{t-1}^{(m)}$ and the current state is $x_t^{(m)} = x_{t-1}^{(m)} + \Delta_k^{(m)}$, and 0 otherwise.
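A minimal sketch of the event-based kernel of Eqs. (3)-(5) on a factored state follows; the data layout (one propensity factor per individual) and the example constants are our own assumptions.

```python
import numpy as np

def transition_prob(x_prev, x_curr, v, events):
    """P(x_t, v_t | x_{t-1}) per Eqs. (4)-(5).

    events: dict k -> (c_k, [g_k^(m) per-individual factors], delta_k vector);
    v = k for event k, or v = None for the auxiliary "no event".
    """
    x_prev, x_curr = np.asarray(x_prev), np.asarray(x_curr)
    if v is not None:                                  # Eq. (4)
        c, g_list, delta = events[v]
        if not np.array_equal(x_curr, x_prev + delta):
            return 0.0
        return c * np.prod([g(xm) for g, xm in zip(g_list, x_prev)])
    if not np.array_equal(x_curr, x_prev):             # Eq. (5): states unchanged
        return 0.0
    total = sum(c * np.prod([g(xm) for g, xm in zip(g_list, x_prev)])
                for c, g_list, _ in events.values())
    return 1.0 - total

# Infection event S + I -> 2I with one susceptible (m=1) and one infectious (m=2):
events = {"infect": (0.3, [lambda s: 1.0, lambda i: 1.0], np.array([-1, 1]))}
print(transition_prob([1, 1], [0, 2], "infect", events))  # c_i = 0.3
print(transition_prob([1, 1], [1, 1], None, events))      # 0.7
```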
3.2 Variational Inference for Stochastic Kinetic Model
As noted in Section 2.1, exact inference in social dynamics is intractable due to the formidable state
space. However, we can approximate the posterior distribution $P(x_{1,\dots,T}, v_{1,\dots,T} \mid y_{1,\dots,T})$ using an
approximate distribution within the exponential family. The inference algorithm minimizes the KL
divergence between these two distributions, which can be formulated as an optimization problem [14]:
Minimize:
$$\sum_{t,\, x_{t-1},\, x_t,\, v_t} \hat{\xi}_t(x_{t-1}, x_t, v_t)\, \log \frac{\hat{\xi}_t(x_{t-1}, x_t, v_t)}{P(x_t, v_t \mid x_{t-1})\, P(y_t \mid x_t)} \;-\; \sum_{t,\, x_t} \prod_m \hat{\gamma}_t^{(m)}(x_t^{(m)})\, \log \prod_m \hat{\gamma}_t^{(m)}(x_t^{(m)}) \quad (6)$$
Subject to:
$$\sum_{v_t,\, x_{t-1},\, \{x_t \setminus x_t^{(m)}\}} \hat{\xi}_t(x_{t-1}, x_t, v_t) = \hat{\gamma}_t^{(m)}(x_t^{(m)}), \quad \text{for all } t, m, x_t^{(m)},$$
$$\sum_{v_t,\, \{x_{t-1} \setminus x_{t-1}^{(m)}\},\, x_t} \hat{\xi}_t(x_{t-1}, x_t, v_t) = \hat{\gamma}_{t-1}^{(m)}(x_{t-1}^{(m)}), \quad \text{for all } t, m, x_{t-1}^{(m)},$$
$$\sum_{x_t^{(m)}} \hat{\gamma}_t^{(m)}(x_t^{(m)}) = 1, \quad \text{for all } t, m.$$
The objective function is the Bethe free energy, composed of average energy and Bethe entropy approximation [23]. $\hat{\xi}_t(x_{t-1}, x_t, v_t)$ is the approximate two-slice statistics and $\hat{\gamma}_t^{(m)}(x_t^{(m)})$ is the approximate one-slice statistics for each individual $m$. They form the approximate distribution over which to minimize the Bethe free energy. The $\sum_{t, x_{t-1}, x_t, v_t}$ is an abbreviation for summing over $t$, $x_{t-1}$, $x_t$, and $v_t$. $\sum_{\{x_t \setminus x_t^{(m)}\}}$ is the sum over all individuals in $x_t$ except $x_t^{(m)}$. We use similar abbreviations below. The first two sets of constraints are marginalization conditions, and the third
is normalization conditions. To solve this constrained optimization problem, we first define the
Lagrange function using Lagrange multipliers to weight constraints, then take the partial derivatives
with respect to $\hat{\xi}_t(x_{t-1}, x_t, v_t)$ and $\hat{\gamma}_t^{(m)}(x_t^{(m)})$. The dual problem is to find the approximate forward statistics $\hat{\alpha}_{t-1}^{(m)}(x_{t-1}^{(m)})$ and backward statistics $\hat{\beta}_t^{(m)}(x_t^{(m)})$ in order to maximize the pseudo-likelihood function. The duality is between minimizing Bethe free energy and maximizing pseudo-likelihood. The fixed-point solution for the primal problem is as follows¹:
The fixed-point solution for the primal problem is as follows1 :
1 X
Q
(m)
(m)
(m)
(m) Q
(m) (m) Q
(m)
(m)
??t (xt 1 , xt , vt ) =
P (xt ,vt |xt 1 )? m ?
? t 1 (xt 1 )? m P (yt |xt )? m ?t (xt ).
(7)
Zt
(m0 ) (m0 )
m0 6=m,xt
1
,xt
(m)
(m)
??t (xt 1 , xt , vt )
is the two-slice statistics for an individual m, and Zt is the normalization constant.
Given the factorized form of P (xt , vt |xt 1 ) in Eqs. (4) and (5), everything in Eq. (7) can be written
(m)
(m)
in a factorized form. After reformulating the term relevant to the individual m, ??t (xt 1 , xt , vt )
can be shown neatly as follows:
1 ? (m)
(m)
(m)
(m)
(m) (m)
(m) (m)
(m) (m)
??t (xt 1 , xt , vt ) =
P (xt , vt |xt 1 ) ? ?
? t 1 (xt 1 )P (yt |xt ) ?t (xt ),
(8)
Zt
(m)
(m)
where the marginalized transition kernel P? (xt , vt |xt 1 ) for the individual m can be defined as:
Y
(m)
(m)
(m) (m)
(m0 )
(m)
(m)
(m)
P? (xt , vt = k|xt 1 ) = ck gk (xt 1 )
g?k,t 1 ? (xt
xt 1 ? k ),
(9)
(m)
(m)
P? (xt , vt = ;|xt 1 ) = (1
X
m0 6=m
(m)
(m)
1)
ck gk (xt
Y
m0 6=m
k
(m0 )
1)
g?k,t
P (m0 ) (m0 )
(m0 )
(m0 ) (m0 )
(m0 )
(m0 )
(m0 )
(m0 )
?t 1 (xt 1 )P (yt
|xt
) t
(xt
)gk
(xt 1 )
1=
0
0
0
(m )
(m )
(m )
xt
x
?
t 1
k
g
?k,t
P
(m0 )
(m0 )
(m0 ) (m0 )
(m0 )
(m0 )
(m0 )
(m0 )
?t 1 (xt 1 )P (yt
|xt
) t
(xt
)gk
(xt 1 )
1=
(m0 )
(m0 )
xt
x
?0
t 1
g
?k,t
P
(m)
(xt
(m)
1
xt
? 0),
(10)
(m0 )
(m0 )
(m0 ) (m0 )
(m0 )
(m0 )
|xt
) t
(xt
)
1 (xt 1 )P (yt
0
(m )
x
?0
t 1
?t
(m0 )
xt
P
(m0 )
(m0 )
(m0 ) (m0 )
(m0 )
(m0 )
?t 1 (xt 1 )P (yt
|xt
) t
(xt
)
(m0 )
(m0 )
xt
x
?0
t 1
,
,
In the above equations, we consider the mean field effect by summing over the current and previous states of all the other individuals $m' \neq m$. The marginalized transition kernel considers the probability of event $k$ on the individual $m$ given the context of the temporal evolutions of the other individuals. Comparing Eqs. (9) and (10) with Eqs. (4) and (5), instead of multiplying $g_k^{(m')}(x_{t-1}^{(m')})$ for individual $m' \neq m$, we use the expected value of $g_k^{(m')}$ with respect to the marginal probability distribution of $x_{t-1}^{(m')}$.
Complexity Analysis: In our inference algorithm, the most computation-intensive step is the
marginalization in Eqs. (9)-(10). The complexity is $O(MS^2)$, where $M$ is the number of individuals and $S$ is the state space of a single individual. The complexity of the entire algorithm is
therefore $O(MS^2TN)$, where $T$ is the number of time steps and $N$ is the number of iterations until
convergence. As such, the complexity of our algorithm grows only linearly with the number of
individuals; it offers excellent scalability when the number of tracked individuals becomes large.
3.3 Parameter Learning
In order to learn the rate constant ck , we maximize the expected log likelihood. In a stochastic kinetic
model, the probability of a sample path is given in Eq. (3). The expected log likelihood over the
posterior probability conditioned on the observations y1 , . . . , yT takes the following form:
$$\mathbb{E}\big[\log P(x_{1,\dots,T}, y_{1,\dots,T}, v_{1,\dots,T})\big] = \sum_{t,\, x_{t-1},\, x_t,\, v_t} \hat{\xi}_t(x_{t-1}, x_t, v_t) \cdot \log\big(P(x_t, v_t \mid x_{t-1})\, P(y_t \mid x_t)\big).$$
$\hat{\xi}_t(x_{t-1}, x_t, v_t)$ is the approximate two-slice statistics defined in Eq. (6). Maximizing this expected
log likelihood by setting its partial derivative over the rate constants to 0 gives the maximum expected
log likelihood estimation of these rate constants.
$$c_k = \frac{\sum_{t,\, x_{t-1},\, x_t} \hat{\xi}_t(x_{t-1}, x_t, v_t = k)}{\sum_{t,\, x_{t-1},\, x_t} \hat{\xi}_t(x_{t-1}, x_t, v_t = \emptyset)\, g_k(x_{t-1})} \;\approx\; \frac{\sum_t \sum_{x_{t-1},\, x_t} \hat{\xi}_t(x_{t-1}, x_t, v_t = k)}{\sum_t \prod_m \sum_{x_{t-1}^{(m)}} \hat{\gamma}_{t-1}^{(m)}(x_{t-1}^{(m)})\, g_k^{(m)}(x_{t-1}^{(m)})}. \quad (11)$$
¹ The derivations for the optimization problem and its solution are shown in the Supplemental Material.
As such, the rate constant for event k is the expected number of times that this event has occurred
divided by the total expected number of times this event could have occurred.
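A sketch of this M-step, under an assumed dense layout for the two-slice statistics, is given below; in practice these arrays would be the per-individual approximations produced by the inference step, and the names are our own illustration.

```python
import numpy as np

def update_rate_constant(xi_k, xi_null, g_k):
    """Eq. (11): c_k = sum ξ̂_t(x_{t-1}, x_t, v_t=k) / sum ξ̂_t(x_{t-1}, x_t, v_t=∅) g_k(x_{t-1}).

    xi_k, xi_null : (T, S, S) arrays of two-slice statistics for event k / no event,
                    indexed [t, x_{t-1}, x_t];  g_k : (S,) propensity factor g_k(x_{t-1}).
    """
    numer = xi_k.sum()
    denom = (xi_null.sum(axis=2) * g_k[None, :]).sum()  # sum over t and x_{t-1}
    return numer / denom

rng = np.random.default_rng(3)
xi_k = rng.random((10, 2, 2)) * 0.01
xi_null = rng.random((10, 2, 2))
print(update_rate_constant(xi_k, xi_null, g_k=np.array([1.0, 2.0])))
```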
To summarize, we provide the variational inference algorithm below.
Algorithm: Variational Inference with a Stochastic Kinetic Model
Given the observations $y_t^{(m)}$ for $t = 1, \dots, T$ and $m = 1, \dots, M$, find $x_t^{(m)}$, $v_t$ and rate constants $c_k$ for $k = 1, \dots, V$.

Latent state inference. Iterate through the following forward and backward passes until convergence, where $\tilde{P}(x_t^{(m)}, v_t \mid x_{t-1}^{(m)})$ is given by Eqs. (9) and (10).

• Forward pass. For $t = 1, \dots, T$ and $m = 1, \dots, M$, update $\hat{\alpha}_t^{(m)}(x_t^{(m)})$ according to
$$\hat{\alpha}_t^{(m)}(x_t^{(m)}) = \frac{1}{Z_t} \sum_{x_{t-1}^{(m)},\, v_t} \hat{\alpha}_{t-1}^{(m)}(x_{t-1}^{(m)})\, \tilde{P}(x_t^{(m)}, v_t \mid x_{t-1}^{(m)})\, P(y_t^{(m)} \mid x_t^{(m)}).$$

• Backward pass. For $t = T, \dots, 1$ and $m = 1, \dots, M$, update $\hat{\beta}_{t-1}^{(m)}(x_{t-1}^{(m)})$ according to
$$\hat{\beta}_{t-1}^{(m)}(x_{t-1}^{(m)}) = \frac{1}{Z_t} \sum_{x_t^{(m)},\, v_t} \hat{\beta}_t^{(m)}(x_t^{(m)})\, \tilde{P}(x_t^{(m)}, v_t \mid x_{t-1}^{(m)})\, P(y_t^{(m)} \mid x_t^{(m)}).$$

Parameter estimation. Iterate through the latent state inference (above) and the rate constant estimates of $c_k$ according to Eq. (11), until convergence.
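The per-individual forward and backward passes can be sketched as follows, treating the marginalized kernel of Eqs. (9) and (10) as a given array for one sweep; the full algorithm recomputes this kernel from the other individuals' statistics at every iteration, which this simplified sketch omits.

```python
import numpy as np

def forward_pass(P_tilde, emit, alpha0):
    """alpha_t(x) ∝ sum_{x'} alpha_{t-1}(x') P~_t(x | x') P(y_t | x).

    P_tilde : (T, S, S) with P_tilde[t][x', x] = P~(x_t = x | x_{t-1} = x'),
    already summed over events;  emit : (T, S) with P(y_t | x_t = x).
    """
    T, S, _ = P_tilde.shape
    alpha = np.zeros((T + 1, S)); alpha[0] = alpha0
    for t in range(T):
        a = (alpha[t] @ P_tilde[t]) * emit[t]
        alpha[t + 1] = a / a.sum()
    return alpha

def backward_pass(P_tilde, emit):
    """beta_{t-1}(x') ∝ sum_x beta_t(x) P~_t(x | x') P(y_t | x)."""
    T, S, _ = P_tilde.shape
    beta = np.ones((T + 1, S))
    for t in range(T - 1, -1, -1):
        b = P_tilde[t] @ (emit[t] * beta[t + 1])
        beta[t] = b / b.sum()
    return beta

rng = np.random.default_rng(4)
P_tilde = rng.dirichlet(np.ones(2), size=(5, 2))  # (T, S, S) row-stochastic kernels
emit = rng.random((5, 2))
print(forward_pass(P_tilde, emit, np.array([0.5, 0.5]))[-1])
```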
4 Experiments on Epidemic Applications
In this section, we evaluate the performance of variational inference with a stochastic kinetic model
(VISKM) algorithm on epidemic dynamics, with which we predict the transmission of diseases and
the health status of each individual based on proximity data collected from sensor networks.
4.1 Epidemic Dynamics
In epidemic dynamics, $G_t = (\mathcal{M}, E_t)$ is a dynamic network, where each node $m \in \mathcal{M}$ is an individual in the network, and $E_t = \{(m_i, m_j)\}$ is a set of edges in $G_t$ representing that individuals $m_i$ and $m_j$ have interacted at a specific time $t$. There are two possible hidden states for each individual $m$ at time $t$, $x_t^{(m)} \in \{0, 1\}$, where 0 indicates the susceptible state and 1 the infectious state. $y_t^{(m)} \in \{0, 1\}$ represents the presence or absence of symptoms for individual $m$ at time $t$. $P(y_t^{(m)} \mid x_t^{(m)})$ represents the observation probability. We define three types of events in epidemic applications: (1) A previously infectious individual recovers and becomes susceptible again: $I \xrightarrow{c_1} S$. (2) An infectious individual infects a susceptible individual in the network: $S + I \xrightarrow{c_2} 2I$. (3) A susceptible individual in the network is infected by an outside infectious individual: $S \xrightarrow{c_3} I$. Based on these events, the transition kernel can be defined as follows:
$$P(x_t^{(m)} = 0 \mid x_{t-1}^{(m)} = 1) = c_1, \qquad P(x_t^{(m)} = 1 \mid x_{t-1}^{(m)} = 1) = 1 - c_1,$$
$$P(x_t^{(m)} = 0 \mid x_{t-1}^{(m)} = 0) = (1 - c_3)(1 - c_2)^{C_{m,t}}, \qquad P(x_t^{(m)} = 1 \mid x_{t-1}^{(m)} = 0) = 1 - (1 - c_3)(1 - c_2)^{C_{m,t}},$$
where $C_{m,t} = \sum_{m': (m', m) \in E_t} \delta(x_t^{(m')} \doteq 1)$ is the number of possible infectious sources for individual $m$ at time $t$. Intuitively, the probability of a susceptible individual becoming infected is 1 minus the probability that no infectious individuals (inside or outside the network) infected him. When the probability of infection is very small, we can approximate $P(x_t^{(m)} = 1 \mid x_{t-1}^{(m)} = 0) \approx c_3 + c_2 \cdot C_{m,t}$.
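A minimal sketch of this epidemic transition kernel, computing $C_{m,t}$ from the contact edges, is given below; the data structures and constants are our own illustration.

```python
def infection_transition(x_prev, edges, c1, c2, c3):
    """One step of the epidemic kernel above.

    x_prev : dict m -> {0, 1} (0 susceptible, 1 infectious);
    edges  : set of (m_i, m_j) contacts at time t.
    Returns dict m -> P(x_t^(m) = 1 | x_{t-1}).
    """
    p_infected = {}
    for m, state in x_prev.items():
        if state == 1:                       # infectious: stays infectious w.p. 1 - c1
            p_infected[m] = 1.0 - c1
        else:                                # susceptible: count infectious contacts C_{m,t}
            C = sum(1 for (a, b) in edges if (a == m and x_prev[b] == 1)
                                          or (b == m and x_prev[a] == 1))
            p_infected[m] = 1.0 - (1.0 - c3) * (1.0 - c2) ** C
    return p_infected

x = {0: 1, 1: 0, 2: 0}
print(infection_transition(x, edges={(0, 1)}, c1=0.1, c2=0.05, c3=0.01))
```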
4.2 Experimental Results
Data Explanation: We employ two data sets of epidemic dynamics. The real data set is collected
from the Social Evolution experiment [5, 6]. This study records "common cold" symptoms of 65
students living in a university residence hall from January 2009 to April 2009, tracking their locations
and proximities using mobile phones. In addition, the students took periodic surveys regarding their
health status and personal interactions. The synthetic data set was collected on the Dartmouth College
campus from April 2001 to June 2004, and contains the movement history of 13,888 individuals [16].
We synthesized disease transmission along a timeline using the popular susceptible-infectious-susceptible (SIS) epidemiology model [15], then applied the VISKM to calibrate performance. We
selected this data set because we want to demonstrate that our model works on data with a large
number of people over a long period of time.
Evaluation Metrics and Baseline Algorithms: We select the receiver operating characteristic
(ROC) curve as our performance metric because the discrimination thresholds of diseases vary. We
first compare the accuracy and efficiency of VISKM with Gibbs sampling (Gibbs) and particle
filtering (PF) on the Social Evolution data set [7, 8].2 Both Gibbs sampling and particle filtering
iteratively sample the infectious and susceptible latent state sequences and the infection and recovery
events conditioned on these state sequences. Gibbs-Prediction-10000 indicates 10,000 iterations of
Gibbs sampling with 1000 burn-in iterations for the prediction task. PF-Smoothing-1000 similarly
refers to 1000 iterations of particle filtering for the smoothing task. All experiments are performed on
the same computer.
Individual State Inference: We infer the probabilities of a hidden infectious state for each individual
at different times under different scenarios. There are three tasks: 1. Prediction: Given an individual's
past health and current interaction patterns, we predict the current infectious latent state. Figure 2(a)
compares prediction performance among the different approximate inference methods. 2. Smoothing:
Given an individual's interaction patterns and past health with missing periods, we infer the infectious
latent states during these missing periods. Figure 2(b) compares the performance of the three
inference methods. 3. Expansion: Given the health records of a portion (≈ 10%) of the population,
we estimate the individual infectious states of the entire population before medically inspecting
them. For example, given either a group of volunteers willing to report their symptoms or the
symptom data of patients who came to hospitals, we determine the probabilities that the people near
these individuals also became or will become infected. This information helps the government or
aid agencies to efficiently distribute limited medical resources to those most in need. Figure 2(c)
compares the performance of the different methods. From the above three graphs, we can see that all
three methods identify the infectious states in an accurate way. However, VISKM outperforms Gibbs
sampling and particle filtering in terms of area under the ROC curve for all three tasks. VISKM has
an advantage in the smoothing task because the backward pass helps to infer the missing states using
subsequent observations. In addition, the performance of Gibbs and PF improves as the number of
samples/particles increases.
Figure 2(d) shows the performance of the three tasks on the Dartmouth data set. We do not apply
the same comparison because it takes too much time for sampling. From the graph, we can see that
VISKM infers most of the infectious moments of individuals in an accurate way for a large social
system. In addition, the smoothing results are slightly better than the prediction results because we
can leverage observations from both directions. The expansion case is relatively poor, because we
use only very limited information to derive the results; however, even in this case the ROC curve has
good discriminating power to differentiate between infectious and susceptible individuals.
Collective Statistics Inference: After determining the individual results, we aggregate them to
approximate the total number of infected individuals in the social system as time evolves. This offers
a collective statistical summary of the spread of disease in one area as in traditional research, which
typically scales the sample statistics with respect to the sample ratio. Figures 2(e) and (f) show
that given 20% of the Social Evolution data and 10% of the Dartmouth data, VISKM estimates the
collective statistics better than the other methods.
Efficiency and Scalability: Table 1 shows the running time of different algorithms for the Social
Evolution data on the same computer. From the table, we can see that Gibbs sampling runs slightly
longer than PF, but they are in the same scale. However, VISKM requires much less computation time.
² Code and data are available at http://cse.buffalo.edu/~wendong/.
[Figure 2 spans six panels: (a) Prediction, (b) Smoothing, and (c) Expansion show ROC curves (true positive rate vs. false positive rate) comparing VISKM against PF and Gibbs variants; (d) Dartmouth shows ROC curves for the three VISKM tasks; (e) Social Evolution Statistics and (f) Dartmouth Statistics plot the number of patients against the time sequence, comparing the real count, VISKM aggregation, sampling baselines, and scaling.]
Figure 2: Experimental results. (a-c) show the prediction, smoothing, and expansion performance
comparisons for Social Evolution data, while (d) shows performance of the three tasks for Dartmouth
data. (e-f) represent the statistical inferences for both data sets.
Table 1: Running time for different approximate inference algorithms. Gibbs_10000 refers to Gibbs
sampling for 10,000 iterations, and PF_1000 to particle filtering for 1000 iterations. Other entries
follow the same pattern. All times are measured in seconds.
            VISKM   Gibbs_1000   Gibbs_10000   PF_1000   PF_10000
60 People   0.78    771          7820          601       6100
30 People   0.39    255          2556          166       1888
15 People   0.19    101          1003          122       1435
In addition, the computation time of VISKM grows linearly with the number of individuals, which
validates the complexity analysis in Section 3.2. Thus, it offers excellent scalability for large social
systems. In comparison, Gibbs sampling and PF grow super-linearly with the number of individuals,
and roughly linearly with the number of samples.
Summary: Our proposed VISKM achieves higher accuracy in terms of area under ROC curve
and collective statistics than Gibbs sampling or particle filtering (within 10,000 iterations). More
importantly, VISKM is more efficient than sampling with much less computation time. Additionally,
the computation time of VISKM grows linearly with the number of individuals, demonstrating its
excellent scalability for large social systems.
5 Conclusions
In this paper, we leverage sensor network and social network data to capture temporal evolution in
social dynamics and infer individual behaviors. In order to define the adaptive transition kernel, we
introduce a stochastic dynamic model that captures the dynamics of complex interactions. In addition,
in order to make tractable inferences we propose a variational inference algorithm the computation
complexity of which grows linearly with the number of individuals. Large-scale experiments on
epidemic dynamics demonstrate that our method effectively captures the evolution of social dynamics
and accurately infers individual behaviors. More accurate collective effects can be also derived
through the aggregated results. Potential applications for our algorithm include the dynamics of
emotion, opinion, rumor, collaboration, and friendship.
References
[1] A. Arkin, J. Ross, and H. H. McAdams. Stochastic kinetic analysis of developmental pathway bifurcation in phage λ-infected Escherichia coli cells. Genetics, 149(4):1633–1648, 1998.
[2] M. Brand, N. Oliver, and A. Pentland. Coupled hidden Markov models for complex action recognition. In Proc. of CVPR, pages 994–999, 1997.
[3] C. Castellano, S. Fortunato, and V. Loreto. Statistical physics of social dynamics. Reviews of Modern Physics, 81(2):591, 2009.
[4] I. Cohn, T. El-Hay, N. Friedman, and R. Kupferman. Mean field variational approximation for continuous-time Bayesian networks. The Journal of Machine Learning Research, 11:2745–2783, 2010.
[5] W. Dong, K. Heller, and A. S. Pentland. Modeling infection with multi-agent dynamics. In International Conference on Social Computing, Behavioral-Cultural Modeling, and Prediction, pages 172–179. Springer, 2012.
[6] W. Dong, B. Lepri, and A. S. Pentland. Modeling the co-evolution of behaviors and social relationships using mobile phone data. In Proc. of the 10th International Conference on Mobile and Ubiquitous Multimedia, pages 134–143. ACM, 2011.
[7] W. Dong, A. Pentland, and K. A. Heller. Graph-coupled HMMs for modeling the spread of infection. In Proc. of UAI, pages 227–236, 2012.
[8] A. Doucet and A. M. Johansen. A tutorial on particle filtering and smoothing: Fifteen years later. Handbook of Nonlinear Filtering, 12(656-704):3, 2009.
[9] S. N. Durlauf and H. P. Young. Social Dynamics, volume 4. MIT Press, 2004.
[10] S. Eubank, H. Guclu, V. S. A. Kumar, M. V. Marathe, A. Srinivasan, Z. Toroczkai, and N. Wang. Modelling disease outbreaks in realistic urban social networks. Nature, 429(6988):180–184, 2004.
[11] D. T. Gillespie. Stochastic simulation of chemical kinetics. Annu. Rev. Phys. Chem., 58:35–55, 2007.
[12] A. Golightly and D. J. Wilkinson. Bayesian parameter inference for stochastic biochemical network models using particle Markov chain Monte Carlo. Interface Focus, 2011.
[13] C. Heaukulani and Z. Ghahramani. Dynamic probabilistic models for latent feature propagation in social networks. In Proc. of ICML, pages 275–283, 2013.
[14] T. Heskes and O. Zoeter. Expectation propagation for approximate inference in dynamic Bayesian networks. In Proc. of UAI, pages 216–223, 2002.
[15] M. J. Keeling and P. Rohani. Modeling Infectious Diseases in Humans and Animals. Princeton University Press, 2008.
[16] D. Kotz, T. Henderson, I. Abyzov, and J. Yeo. CRAWDAD data set dartmouth/campus (v. 2007-02-08). Downloaded from http://crawdad.org/dartmouth/campus/, 2007.
[17] K. Murphy and S. Russell. Rao-Blackwellised particle filtering for dynamic Bayesian networks. In Sequential Monte Carlo Methods in Practice, pages 499–515. Springer, 2001.
[18] U. Nodelman, C. R. Shelton, and D. Koller. Continuous time Bayesian networks. In Proc. of UAI, pages 378–387. Morgan Kaufmann Publishers Inc., 2002.
[19] M. Opper and G. Sanguinetti. Variational inference for Markov jump processes. In Proc. of NIPS, pages 1105–1112, 2008.
[20] V. Rao and Y. W. Teh. Fast MCMC sampling for Markov jump processes and continuous time Bayesian networks. In Proc. of UAI, 2011.
[21] J. W. Robinson and A. J. Hartemink. Learning non-stationary dynamic Bayesian networks. The Journal of Machine Learning Research, 11:3647–3680, 2010.
[22] D. J. Wilkinson. Stochastic Modelling for Systems Biology. CRC Press, 2011.
[23] J. S. Yedidia, W. T. Freeman, and Y. Weiss. Understanding belief propagation and its generalizations. Exploring Artificial Intelligence in the New Millennium, 8:236–239, 2003.
A Non-parametric Learning Method for Confidently
Estimating Patient's Clinical State and Dynamics
William Hoiles
Department of Electrical Engineering
University of California Los Angeles
Los Angeles, CA 90024
whoiles@ucla.edu
Mihaela van der Schaar
Department of Electrical Engineering
University of California Los Angeles
Los Angeles, CA 90024
mihaela@ee.ucla.edu
Abstract
Estimating a patient's clinical state from multiple concurrent physiological streams
plays an important role in determining if a therapeutic intervention is necessary and
for triaging patients in the hospital. In this paper we construct a non-parametric
learning algorithm to estimate the clinical state of a patient. The algorithm addresses several known challenges with clinical state estimation such as eliminating
the bias introduced by therapeutic intervention censoring, increasing the timeliness
of state estimation while ensuring a sufficient accuracy, and the ability to detect
anomalous clinical states. These benefits are obtained by combining the tools of
non-parametric Bayesian inference, permutation testing, and generalizations of the
empirical Bernstein inequality. The algorithm is validated using real-world data
from a cancer ward in a large academic hospital.
1 Introduction
Timely clinical state estimation can significantly improve the quality of care for patients by informing
clinicians of patients that have entered a high-risk clinical state. This is a challenging problem as the
patient's clinical state is not directly observable and must be inferred from the patient's vital signs
and the clinician's domain-knowledge. Several methods exist for estimating the patient's clinical
state including clinical guidelines and risk scores [21, 18]. The limitation with these population
based methods is that they are not personalized (e.g. patient models are not unique), can not
detect anomalous patient dynamics, and most importantly, are biased due to therapeutic intervention
censoring [16]. Therapeutic intervention censoring occurs when a patient's physiological signals are
misclassified in the training data as a result of the effects caused by therapeutic interventions. To
improve the quality of patient care, new methods are needed to overcome these limitations.
In this paper we develop an algorithm for estimating a patient's clinical state based on previously
recorded electronic health record (EHR) data. A schematic of the algorithm is provided in Fig.1 which
contains three primary components: a) learning the patient?s stochastic model, b) using statistical
techniques to evaluate the quality of the estimated stochastic model, and c) performing clinical state
estimation for new patients based on their estimated models. The works by Fox et al. [10, 9] and
Saria et al. [19] for temporal segmentation are the most related to our algorithm. However [10, 19]
do not apply formal statistical techniques to validate and iteratively update the hyper-parameters
of the non-parametric Bayesian inference, are not personalized, do not remove the bias caused
by therapeutic intervention censoring, and do not utilize clinician domain knowledge for clinical
state estimation. Additionally, applying fully Bayesian methods [9] for clinical state estimation is
computationally prohibitive as the computational complexity of constructing the stochastic model of
all patients grows polynomially with the number of samples and maximum number of possible states
of all patients. The computational complexity of our algorithm is only polynomial in the number
of samples and states of a single patient. A detailed literature review is provided in the Supporting
Material.
The proposed algorithm (Fig.1) learns a combinatorial stochastic model for each patient based on
their measured vital signs. A non-parametric Bayesian learning algorithm based on the hierarchical
Dirichlet process hidden Markov model (HDP-HMM) [10] is used to learn the patient's stochastic
model which is composed of a possibly infinite state-space HMM where each state is associated with
a unique dynamic model. The algorithm dynamically adjusts the number of detected dynamic models
and their temporal duration based on the patient's vital signs; that is, the algorithm has a data-driven
bound on the model complexity (e.g. number of detected states). The patient's stochastic model
provides a fine-grained personalized representation of each patient that is interpretable for clinicians,
and accounts for the patient's specific dynamics which may result from therapeutic interventions and
medical complications (e.g. disease, paradoxical reaction to a drug, bone fracture). To ensure that
each detected dynamic model is associated with a unique clinical state, the hyper-parameters in the
HDP-HMM are updated iteratively using the results from an improved Bonferroni method [2]. This
mitigates the major weakness of non-parametric Bayesian inference methods of how to select the
hyper-parameters [14, 12]. Additionally, the algorithm provides statistical guarantees on the dynamic
model parameters using generalizations of the scalar Bernstein inequality [13] to vector-valued
and matrix-valued random variables. In clinical applications it is desirable to relate a collection of
dynamic models from several patients to a unique clinical state of interest for the clinician (e.g.
detecting which patients have entered a high-risk clinical state). The clinician defines a supervised
training set that is composed of all previously observed patients' dynamic models and their associated
clinical state, which is then used to construct a similarity metric. This construction of the similarity
metric between dynamic models and clinical states ensures that the bias introduced from therapeutic
intervention censoring is removed, and also allows for the detection of anomalous dynamic models
that are not associated with a previously defined clinical state. When a new patient arrives the
algorithm will learn their stochastic model, and then use the similarity metric to map the detected
dynamic models to their associated clinical states of interest.
Though our algorithm is general and can be applied in several medical settings (e.g. mobile health,
wireless health) here we focus on detecting the clinical state of patients in hospital wards. Specifically
we apply our algorithm to patients in a cancer ward of a large academic hospital.
[Figure 1: Schematic of the proposed algorithm for learning the dynamic model and estimating the clinical state of the patient. Offline: Electronic Health Records D feed Segmentation (fine-grained personalization) followed by Validation with the clinician; a valid segmentation D-hat is constructed from D and provided to the clinician to construct the labeled dataset L. Online: a new patient's vitals {y_t}_{t in T} are segmented and labeled via Similarity against L to produce the Clinical State Estimate.]
2 Non-parametric Learning Algorithm for Patient's Stochastic Model
In this section we provide a method to segment patients' electronic health record data $D = \{\{y_t^i\}_{t \in T^i}\}_{i \in I}$, with $y_t^i \in \mathbb{R}^m$ the vital signs of patient $i \in I$ at time $t$. To segment the temporal data we assume that the vital signs of each patient originate from a switching multivariate Gaussian (SMG) process. A Bayesian non-parametric learning algorithm is utilized to select the switching times between the unique dynamic models; that is, we consider the observation dynamics and model switching dynamics simultaneously. The final result of the segmentation is the dataset:

$$\hat{D} = \big\{\{y_t^i\}_{t \in T_k^i},\; k \in \{1,\dots,K^i\} = \mathcal{K}^i\big\}_{i \in I} \qquad (1)$$

with $T_k^i$ the time samples for segment $k$ and $\mathcal{K}^i$ the set of segments for patient $i$. Statistical methods are used to ensure that each dynamic model is associated with a unique clinical state; refer to Sec.3 for details.
We assume that the switching process between models satisfies a HMM where each state of the HMM is associated with a unique dynamic model given by:

$$y_t = \eta_t(z_t), \qquad \eta_t(z_t) \sim \mathcal{N}\big(\mu(z_t), \Sigma(z_t)\big) \qquad (2)$$

where $z_t \in \mathcal{K}^i$ is the state of the patient, and $\eta_t(z_t)$ is a Gaussian white noise term with covariance matrix $\Sigma(z_t)$. For notational convenience we will suppress the indices $i$ and only include them explicitly when required. For segmentation each of the patients is treated independently. Each state $z_t$ is assumed to evolve according to a HMM with $z_t$ associated with a specific segment $k \in \mathcal{K}$. Notice that we must estimate the total number of states $|\mathcal{K}|$, and the associated model parameters $\{\mu(k), \Sigma(k)\}_{k \in \mathcal{K}}$, using only the data $\{y_t\}_{t \in T}$.
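For concreteness, the SMG process (2) driven by a sticky Markov chain can be simulated directly; the following Python sketch is purely illustrative (the state count, transition weights, and emission parameters are assumptions, not values from the paper).

```python
# Minimal sketch: simulate the switching multivariate Gaussian process of
# Eq.(2). States z_t follow a Markov chain with heavy self-transitions
# (the behavior the sticky HDP-HMM prior below encourages via kappa),
# and each state k emits y_t ~ N(mu(k), Sigma(k)).
import numpy as np

rng = np.random.default_rng(0)
K, m, T = 3, 2, 500                       # states, vital-sign dim, samples

P = np.full((K, K), 0.01)                 # transition matrix
np.fill_diagonal(P, 0.98)
P /= P.sum(axis=1, keepdims=True)

mu = 5.0 * rng.normal(size=(K, m))        # per-state means mu(k)
Sigma = np.stack([(k + 1.0) * np.eye(m) for k in range(K)])  # Sigma(k)

z = np.zeros(T, dtype=int)
y = np.zeros((T, m))
for t in range(T):
    if t > 0:
        z[t] = rng.choice(K, p=P[z[t - 1]])
    y[t] = rng.multivariate_normal(mu[z[t]], Sigma[z[t]])
# y plays the role of one patient's vital-sign stream {y_t}_{t in T}.
```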
To learn the cardinality of the HMM we use the tools of non-parametric Bayesian inference by placing
a prior on the HMM parameters to allow a data-driven estimation of cardinality of the state-space.
Recall that non-parametric here indicates that for larger sample size T , the number of possible states
(i.e. dynamic models) can also increase. To model the infinite-HMM we use the hierarchical Dirichlet
process (HDP) [3, 22]. The HDP can be interpreted as a HMM with a countably infinite state-space.
That is, the HDP is a non-parametric prior for the infinite-HMM. The main idea of the HDP is to
link a countably infinite set of Dirichlet processes by sharing atoms among the DPs with each DP
associated with a specific state. The stick-breaking construction of the HDP is given by [8, 22]:
m ? H,
?0 =
?
X
?m ? m ,
?m = vm
m=1
?k =
?
X
?km ?m ,
m?1
Y
(1 ? vl ),
vm ? Beta(1, ?),
l=1
? k ? DP(?, ?).
(3)
m=1
Eq.(3) represents an infinite state HMM with $\pi_{km}$ the transition probability of transitioning from state $k \in \mathcal{K}$ to state $m \in \mathcal{K}$. $\pi_k$ represents the transition probabilities out of state $k$ of the HMM, with $\beta$ the shared prior parameter of the transition distribution, $H$ a prior on the transition probability distribution, and $\gamma$ the concentration of the transition probability distribution of the HMM.
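A hedged sketch of a truncated draw from the stick-breaking construction (3); the truncation level and concentration values are illustrative assumptions (compare the weak limit approximation in Eq.(5) below).

```python
# Draw the global weights beta by stick breaking, then one transition
# distribution pi_k ~ DP(alpha, beta) in its finite (weak-limit) form.
import numpy as np

rng = np.random.default_rng(1)
L, gamma, alpha = 25, 1.0, 5.0            # truncation and concentrations

v = rng.beta(1.0, gamma, size=L)          # v_m ~ Beta(1, gamma)
sticks = np.concatenate(([1.0], np.cumprod(1.0 - v)[:-1]))
beta = v * sticks                         # beta_m = v_m * prod_{l<m}(1 - v_l)
beta /= beta.sum()                        # renormalize mass lost to truncation

pi_k = rng.dirichlet(alpha * beta)        # finite-dimensional DP draw
print(beta[:5], pi_k[:5])
```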
The patient's stochastic model is constructed by combining the SMG (2) with the HDP (or infinite HMM) and is given by:

$$v_k \sim \mathrm{Beta}(1, \gamma), \qquad \beta_k = v_k \prod_{l=1}^{k-1}(1 - v_l), \qquad \pi_k \sim \mathrm{DP}\Big(\alpha + \kappa,\; \frac{\alpha\beta + \kappa\delta_k}{\alpha + \kappa}\Big), \qquad k = 1, 2, \dots$$
$$z_t \sim \pi(\cdot \mid z_{t-1}) = \pi_{z_{t-1}}, \qquad y_t = \eta(z_t), \qquad t = 1, 2, \dots, T. \qquad (4)$$
The parameter $\kappa$ controls how concentrated the state transition function is from state $k$ to state $k'$. This can be seen by setting $\kappa = 0$, in which case $\mathbb{E}[\pi_k] = \beta$. If $\gamma = 1$ then the parameter $\beta_k$ in $\beta$ decays at approximately a geometric rate for increasing $k$. As $\gamma$ increases, the decay of the elements in $\beta$ decreases. For $\alpha > 0$ and $\kappa > 0$, $\mathbb{E}[\pi_k] = (\alpha\beta + \kappa\delta_k)/(\alpha + \kappa)$; as such $\kappa$ controls the bias of $\pi_k$ towards self-transitions, that is, $\pi(k \mid k)$ is given a large weight. The parameter $\alpha + \kappa$ controls the variability of $\pi_k$ around the base state transition distribution $(\alpha\beta + \kappa\delta_k)/(\alpha + \kappa)$.
Given the patient's stochastic model (4), non-parametric Bayesian inference is utilized to estimate the model parameters from the patient's vital signs $\{y_t\}_{t \in T}$. To utilize Bayesian inference we define a prior and compute the associated posterior since a $\sigma$-finite density measure is present. The prior distributions on $\beta$ and $\pi$ are given by:

$$\beta \sim \mathrm{Dir}(\gamma/L, \dots, \gamma/L), \qquad \pi_k \sim \mathrm{Dir}(\alpha\beta_1, \dots, \alpha\beta_k + \kappa, \dots, \alpha\beta_L), \qquad k \in \{1, \dots, L\}. \qquad (5)$$
Eq.(5) is the weak limit approximation with truncation level $L$, where $L$ is the largest number of expected states in the estimated HMM from $\{y_t\}_{t \in T}$ [25]. Note that as $L \to \infty$, (5) approaches the HDP. If clinician domain knowledge is not available on the initial hyper-parameters $\gamma$, $\alpha$, and $\kappa$, then it is common to place Beta or Gamma priors on these distributions [25].
Gaussian we utilize the Normal-Inverse-Wishart prior distribution [11]:
1
v+m+1
?
p(?, ?|?0 , ?, S0 , v) ? |?| 2 exp ? tr(vS0 ??1 ? (? ? ?0 )0 ??1 (? ? ?0 ))
(6)
2
2
3
where $v$ and $S_0$ are the degrees of freedom and the scale matrix for the inverse-Wishart distribution on $\Sigma$, $\mu_0$ is the prior mean, and $\lambda$ is the number of prior measurements on the $\mu$ scale. Given the prior distribution with associated posterior distributions, an MCMC or variational sampler (i.e. Gibbs sampler [10], Beam sampler [25], variational Bayes [6, 7]) can be utilized to estimate the parameters of the patient's stochastic model (4) given the data $\{y_t\}_{t \in T}$.
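For reference, the conjugate posterior update for a segment's $(\mu, \Sigma)$ under a Normal-Inverse-Wishart prior has a closed form used inside such Gibbs sweeps. The sketch below uses the common $(\mu_0, \lambda, S_0, v)$ parameterization and is an illustration under that assumption, not the paper's implementation (which scales the scale matrix by $v$ in (6)).

```python
# Standard NIW conjugate update for one segment of data.
import numpy as np

def niw_posterior(y, mu0, lam, S0, v):
    """y: (n, m) segment data; returns updated (mu_n, lam_n, S_n, v_n)."""
    n, m = y.shape
    ybar = y.mean(axis=0)
    S = (y - ybar).T @ (y - ybar)          # centered scatter matrix
    lam_n, v_n = lam + n, v + n
    mu_n = (lam * mu0 + n * ybar) / lam_n
    d = (ybar - mu0)[:, None]
    S_n = S0 + S + (lam * n / lam_n) * (d @ d.T)
    return mu_n, lam_n, S_n, v_n
```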
3 Statistical Methods to Evaluate Stochastic Model Quality

Given the segmented dataset $\hat{D}$ (1) generated from all the patients' estimated stochastic models (4), this section presents methods to evaluate the quality of $\hat{D}$. This includes testing if the vital signs $\{y_t^i\}_{t \in T_k^i}$ for each patient and unique dynamic model are consistent with a multivariate Gaussian distribution, contain sufficient samples to guarantee the accuracy of the dynamic model parameters, and that the detected dynamic models for each patient are unique. If the estimated stochastic models are of low quality then the hyper-parameters of the non-parametric Bayesian inference algorithm can be iteratively updated to ensure that all the patients' stochastic models accurately represent their dynamics. This is a vital step in medical applications since the results of the non-parametric Bayesian
inference algorithm are sensitive to the selected hyper-parameters [14, 12]. For example Fig.2(a)
illustrates a poor quality segmentation that results from poorly selected hyper-parameters.
3.1 Hypothesis Tests for Model Consistency with Segments

To ensure model consistency we must test if each segment in $\hat{D}$ is consistent with a multivariate Gaussian process (i.e. samples are independent and normally distributed). To test if the segment $\{y_t\}_{t \in T_k} \in \hat{D}$ contains independent samples we evaluate the autocorrelation function (ACF) [5]
for each segment. For $\{y_t\}_{t \in T_k}$ the ACF must decay exponentially to zero, which indicates that the segment contains independent samples. Note that it is possible for a spurious autocorrelation structure to be present in the segment if the segment is composed of a mixture of Gaussian processes. If this is suspected then the hyper-parameters of the non-parametric Bayesian inference algorithm are updated to increase the number of segments (for example by increasing $L$ or decreasing $\kappa$). Since there is no universally most powerful test for multivariate normality, we use the improved Bonferroni method [23], which combines four affine invariant hypothesis test statistics, alleviating the need to select the most sensitive single test while retaining the benefits of these four multivariate normality tests.
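A minimal sketch of the per-segment independence check via the ACF; the white-noise band and the 10% tolerance are illustrative choices, not prescriptions from the paper.

```python
# Flag a segment as independent when its sample autocorrelations mostly
# stay inside the approximate 95% white-noise band.
import numpy as np

def acf_decays(x, max_lag=20):
    """x: (n,) one vital-sign stream within a segment."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    xc = x - x.mean()
    denom = xc @ xc
    band = 1.96 / np.sqrt(n)               # approx. 95% band for white noise
    acf = np.array([xc[:-k] @ xc[k:] / denom for k in range(1, max_lag + 1)])
    return np.mean(np.abs(acf) > band) < 0.1
```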
3.2 Data-Driven Confidence Bounds for Dynamic Model Estimation

An important consideration when evaluating the quality of the segmentation $\hat{D}$ is that each segment contains sufficient samples to confidently estimate the mean and covariance $\{\mu, \Sigma\}$ of the SMG model. This is particularly important in medical applications as it provides an estimate of the maximum number of samples needed to confidently estimate $\{\mu, \Sigma\}$, which are used to estimate the clinical state of the patient. Note that the estimated posterior distribution for $\{\mu, \Sigma\}$ can not be used to bound the number of samples required. To estimate $\{\mu, \Sigma\}$ given $\{y_t\}_{t \in T_k}$, the maximum likelihood estimators given by:
$$\hat{\mu}(k) = \frac{1}{n_k}\sum_{t=1}^{n_k} y_t, \qquad \hat{\Sigma}(k) = \frac{1}{n_k}\sum_{t=1}^{n_k}\big(y_t - \hat{\mu}(k)\big)\big(y_t - \hat{\mu}(k)\big)' \qquad (7)$$

are used, with $n_k = |T_k|$ the total number of samples in segment $k \in \mathcal{K}$. If each vital sign is independent (i.e. spherical multivariate Gaussian distribution) then an empirical Bernstein bound [13] can be constructed to estimate the error between the sample mean $\hat{\mu}$ and the actual mean $\mu$. From the empirical Bernstein bound, the minimum number of samples necessary to ensure that $P(\hat{\mu}(k,j) - \mu(k,j) \ge \epsilon) \le \delta$ for all segments $k \in \mathcal{K}$ and streams $j \in \{1, \dots, m\}$, for some confidence level $\delta > 0$ and tolerance $\epsilon \ge 0$, is given by:
$$n(\epsilon, \delta) \ge \frac{6\sigma_{\max}^2 + 2\Delta_{\max}\epsilon}{3\epsilon^2}\,\ln\Big(\frac{1}{\delta}\Big) \qquad (8)$$

with $\sigma_{\max}^2$ the maximum possible variance and $\Delta_{\max}$ the maximum possible difference between the maximum and minimum values of all values in the vital sign data.
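Bound (8) is directly computable; a one-function sketch, where sigma2_max and delta_max are user-supplied problem bounds (the example values are assumptions):

```python
import numpy as np

def n_scalar(eps, delta, sigma2_max, delta_max):
    """Minimum sample count from bound (8)."""
    return (6.0 * sigma2_max + 2.0 * delta_max * eps) / (3.0 * eps ** 2) \
        * np.log(1.0 / delta)

# e.g. n_scalar(eps=0.5, delta=0.05, sigma2_max=4.0, delta_max=10.0)
```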
To construct a relaxed bound on the sample mean $\hat{\mu} \in \mathbb{R}^m$, and a bound on the sample covariance $\hat{\Sigma} \in \mathbb{R}^{m \times m}$ computed using (7), we generalize the empirical Bernstein bound to the multidimensional case. The goal is to construct a bound of the form $P(\|Z\| \ge \epsilon) \le \delta$ where $\|\cdot\|$ denotes the spectral norm if $Z$ is a matrix, or the 2-norm in the case $Z$ is a vector. To construct a probabilistic bound on the accuracy of the estimated mean we utilize the vector Bernstein inequality given by Theorem 1.
Theorem 1. Let $\{Y_1, \dots, Y_n\}$ be a set of independent random vectors with $Y_t \in \mathbb{R}^m$ for $t \in \{1, \dots, n\}$. Assume that each vector has uniformly bounded deviation such that $\|Y_t\| \le L$ for all $t \in \{1, \dots, n\}$. Writing $Z = \sum_{t=1}^{n} Y_t$, then

$$P(\|Z\| \ge \epsilon) \le (2m)\exp\Big(\frac{-3\epsilon^2}{6V(Z) + 2L\epsilon}\Big), \qquad V(Z) = \sum_{t=1}^{n} \mathbb{E}\big[\|Y_t\|_2^2\big]. \qquad (9)$$
The proof of Theorem 1 is provided in the Supporting Material. To construct the bound on the number of samples necessary to estimate the mean we define $Z = \hat{\mu} - \mu$ with $Y_t = (y_t - \mu)/n$. Using the triangle inequality, Jensen's inequality, and assuming $\|y_t\|_2 \le B_1$ for some constant $B_1$, we have that:

$$L \le \frac{2B_1}{n}, \qquad V(Z) \le \frac{1}{n}\big(B_1^2 - \|\mu\|_2^2\big). \qquad (10)$$

Plugging (10) into (9) results in the minimum number of samples necessary to guarantee that $P(\|\hat{\mu} - \mu\| \ge \epsilon) \le \delta$, with the number of samples $n(\epsilon, \delta)$ given by:

$$n(\epsilon, \delta) \ge \frac{6\big(B_1^2 - \|\mu\|_2^2\big) + 4B_1\epsilon}{3\epsilon^2}\,\ln\Big(\frac{2m}{\delta}\Big). \qquad (11)$$
To bound the number of samples necessary to estimate $\Sigma$ we utilize the corollary of Theorem 1 for real-symmetric matrices with $Z = \hat{\Sigma} - \Sigma$. The bound on the number of samples necessary to guarantee $P(\|\hat{\Sigma} - \Sigma\| \ge \epsilon) \le \delta$, assuming $\|\Sigma\| \le \|y_t - \hat{\mu}\|^2 \le B_2$, is given by:

$$n(\epsilon, \delta) \ge \frac{6B_2^2 + 4B_2\epsilon}{3\epsilon^2}\,\ln\Big(\frac{2m}{\delta}\Big). \qquad (12)$$

For a given $\epsilon$ and $\delta$, and an estimate of the maximum spectral norm of $\Sigma$ and norm of $\mu$, equations (11) and (12) can be used to estimate the minimum number of samples necessary to sufficiently estimate $\{\mu, \Sigma\}$. To accurately compute the clinical state from the unique dynamic model, each segment must satisfy (11) and (12), otherwise any clinical state estimation may give unreliable results.
3.3 Statistical Tests for Statistically Identical Dynamic Models

In this section we construct a novel hypothesis test for mean and covariance equality with a given confidence, and design parameters that control the importance of the mean equality compared to the covariance equality. The hypothesis test both evaluates the quality of the estimated stochastic model, and can also be used to merge statistically identical segments to increase the accuracy of the dynamic model parameter estimates. Given two segments of vital signs, each associated with a supposedly unique dynamic model, we define the null hypothesis $H_0$ as the equality of the mean and covariance matrices from the two dynamic models, and the alternate hypothesis $H_1$ that either the mean or covariance are not equal. Formally:

$$H_0: \mu(k) = \mu(k')\ \text{and}\ \Sigma(k) = \Sigma(k'), \qquad H_1: \mu(k) \ne \mu(k')\ \text{or}\ \Sigma(k) \ne \Sigma(k'). \qquad (13)$$
Several methods exist for testing for covariance equality [20] and for mean equality [24]; however, we wish to test for both covariance and location equality. To test for the global hypothesis $H_0$ in (13), note that $H_0$ and $H_1$ can equivalently be stated as a combination of the sub-hypotheses as follows:

$$H_0: H_0^1 \wedge H_0^2 \qquad \text{and} \qquad H_1: H_1^1 \vee H_1^2 \qquad (14)$$

with $H_0^1: \mu(k) = \mu(k')$, $H_1^1: \mu(k) \ne \mu(k')$, $H_0^2: \Sigma(k) = \Sigma(k')$, and $H_1^2: \Sigma(k) \ne \Sigma(k')$. To construct the hypothesis test for $H_0$, the non-parametric permutation testing method [17] is used, which allows us to combine the sub-hypothesis tests for covariance and mean equality to construct a hypothesis test for $H_0$.
To test for the null hypothesis $H_0^1$ we utilize Hotelling's $T^2$ test, as it is asymptotically the most powerful invariant test when the data associated with $k$ and $k'$ are normally distributed [4]. Given that $y_t$ are generated from a multivariate normal distribution, the test statistic $\tau^1$ follows a $T^2$ distribution such that $\tau^1 \sim T^2(m, n(k) + n(k') - 2)$, where $n(k)$ and $n(k')$ are the number of samples in segments $k$ and $k'$ respectively. To test for the null hypothesis $H_0^2$ we utilize the modified likelihood ratio statistic provided by Bartlett [1], written $\lambda^*$, which is uniformly the most powerful unbiased test for covariance equality [15]. The test statistic for covariance equality is given by:

$$\tau^2 = -2\rho \log(\lambda^*), \qquad \rho = 1 - \frac{2m^2 + 3m - 1}{6(m+1)n}\Big(\frac{n}{n(k)} + \frac{n}{n(k')} - 1\Big), \qquad n = n(k) + n(k').$$

From Theorem 8.2.7 in [15], the asymptotic cumulative distribution function of $\tau^2$ can be approximated by a linear combination of $\chi^2$ distributions, which has a convergence rate of $O((\rho n)^{-3})$.
To construct the permutation test for $H_0$, Tippett's combining function [17] is used with $H_0: \tau = \min(\lambda_1/k^1, \lambda_2/k^2)$, where $\lambda_1$ and $\lambda_2$ are the p-values of the sub-hypothesis tests $H_0^1$ and $H_0^2$ respectively, and $k^1$ and $k^2$ are design parameters. If $k^1 > k^2$ then the mean equality is weighted more than the covariance equality. If $k^1 = k^2$ then both mean equality and covariance equality are weighted equally. For the test statistics $\tau^1$ and $\tau^2$ the p-values are given by $\lambda_1 = P(\tau^1 \ge \tau_0^1)$ and $\lambda_2 = P(\tau^2 \ge \tau_0^2)$, where $\tau_0^1$ and $\tau_0^2$ are realizations of the test statistics. To utilize $\tau$ as a test statistic we require the cumulative distribution function of $\tau$. Note that if $H_0^1$ is true (i.e. mean equality) then the distributions of $\tau^1$ and $\tau^2$ are independent since $\tau^1$ follows a $T^2$ distribution, which results in $\lambda_1 \sim \mathcal{U}(0,1)$ and $\lambda_2 \sim \mathcal{U}(0,1)$ [17]. The cumulative distribution function of $\tau$ is given by $P(\tau \le x) = (k^1 + k^2)x - k^1 k^2 x^2$ for $x \in [0, \min(1/k^1, 1/k^2)]$. Given $P(\tau \le x)$, for a significance level $\alpha$, we reject the null hypothesis $H_0$ if $\tau \le \phi$, where $\phi$ is the solution to $P(\tau \le \phi) = \alpha$. The parameter $\phi$ is given by:

$$\phi = \frac{(k^1 + k^2) - \sqrt{(k^1 + k^2)^2 - 4\alpha k^1 k^2}}{2 k^1 k^2}.$$
For a given significance level $\alpha$, and design parameters $k^1$ and $k^2$, we can test $H_0$ for the samples $\{y_t\}_{t \in T_k}$ and $\{y_t\}_{t \in T_{k'}}$ by evaluating $\tau_0 = \min(\lambda_1^0/k^1, \lambda_2^0/k^2)$, with $\lambda_1^0$ and $\lambda_2^0$ the realizations of the p-values for $\lambda_1$ and $\lambda_2$. By repeatedly applying this hypothesis test to segments $\{y_t\}_{t \in T_k}$ for $k \in \mathcal{K}$ we can detect any segments with equal mean and covariance with a significance level $\alpha$. Similar segments can be merged to increase the accuracy of the estimated dynamic model parameters, or be used to evaluate the quality of the patient's stochastic model.
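A hedged sketch of the combined test of this section: Hotelling's $T^2$ for $H_0^1$ via its exact F transform, a Bartlett-type modified LRT for $H_0^2$ using the $\chi^2$ approximation (in place of the linear $\chi^2$ combination cited from [15]), and Tippett's rule with the threshold $\phi$ derived above.

```python
# Combined mean/covariance equality test for two segments Y1, Y2.
import numpy as np
from scipy import stats

def combined_test(Y1, Y2, k1=1.0, k2=1.0, alpha=0.05):
    n1, m = Y1.shape
    n2 = Y2.shape[0]
    n = n1 + n2
    S1 = np.cov(Y1, rowvar=False)
    S2 = np.cov(Y2, rowvar=False)
    Sp = ((n1 - 1) * S1 + (n2 - 1) * S2) / (n - 2)   # pooled covariance
    # Hotelling's T^2 -> exact F p-value (lambda_1)
    d = Y1.mean(0) - Y2.mean(0)
    T2 = (n1 * n2 / n) * d @ np.linalg.solve(Sp, d)
    F = (n - m - 1) / (m * (n - 2)) * T2
    lam1 = stats.f.sf(F, m, n - m - 1)
    # Bartlett-type modified LRT -> chi^2 p-value (lambda_2)
    loglam = 0.5 * ((n1 - 1) * np.linalg.slogdet(S1)[1]
                    + (n2 - 1) * np.linalg.slogdet(S2)[1]
                    - (n - 2) * np.linalg.slogdet(Sp)[1])
    rho = 1.0 - (2 * m ** 2 + 3 * m - 1) / (6.0 * (m + 1) * n) \
        * (n / n1 + n / n2 - 1.0)
    tau2 = -2.0 * rho * loglam
    lam2 = stats.chi2.sf(tau2, df=m * (m + 1) // 2)
    # Tippett's combining function, reject H0 when tau <= phi
    tau = min(lam1 / k1, lam2 / k2)
    ksum = k1 + k2
    phi = (ksum - np.sqrt(ksum ** 2 - 4 * alpha * k1 * k2)) / (2 * k1 * k2)
    return tau <= phi
```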
4 Estimating Patient's Clinical State using Clinician Domain-Knowledge
In this section the Algorithm 1 (Fig.1) is presented which constructs stochastic models of patients
based on their historical EHR data and clinician domain-knowledge, and is used to classify the
clinical state of new patients.
Algorithm 1 is composed of five main steps. Step#1 and Step#2 are used to construct the stochastic models of the patients based on the EHR data $D$, and to construct the segmented dataset $\hat{D}$ (1). The stochastic models are constructed using the non-parametric Bayesian inference algorithm from Sec.2. Step#2 measures the quality of the stochastic models, and iteratively updates the hyper-parameters of the Bayesian inference algorithm to guarantee the quality of the detected dynamic models, as discussed in Sec.3. In Step#3 each segment (e.g. dynamic model) in $\hat{D}$ is labelled by the clinician, based on the clinical states of interest, to construct the dataset $L$. Step#4 and Step#5 involve the online portion of the algorithm, which constructs stochastic models for new patients and estimates their clinical state based on each patient's estimated stochastic model. Step#4 constructs the stochastic model for the new patient; then in Step#5 each unique dynamic model from Step#4 is associated with a clinical state of interest using the labelled dataset $L$ from Step#3. Note that $L$ contains several segments (e.g. dynamic models) that are associated with one clinical state. To estimate the clinical state of the new patient a similarity metric based on the Bhattacharyya distance, written $D_B(\cdot)$, is used. If the minimum Bhattacharyya distance between the new patient's segment $k$ and the next closest segment $k' \in L$ is greater than $\phi_{th}$, the segment is labelled as anomalous; otherwise the segment is given the label of segment $k' \in L$. Information on the computational complexity and implementation details of Algorithm 1 is provided in the Supporting Material.
5 Real-World Clinical State Estimation in Cancer Ward
In this section Algorithm 1 is applied to a real-world EHR dataset composed of a cohort of patients
admitted to a cancer ward. A detailed description of the dataset is provided in the Supporting Material.
Algorithm 1 Patient Clinical State Estimation
Step#1: Construct stochastic models for each patient using $D$ and the non-parametric Bayesian algorithm presented in Sec.2. Using the stochastic models construct the dataset $\hat{D}$ (1).
Step#2: To evaluate the quality of each stochastic model, each segment in $\hat{D}$ from Step#1 is tested for: i) model consistency, ii) sufficient samples to guarantee accuracy of dynamic model parameter estimates, and iii) statistical uniqueness of segments, using the methods in Sec.3. If the quality is not sufficient then return to Step#1 with updated hyper-parameters for the non-parametric Bayesian inference algorithm.
Step#3: Given $\hat{D}$ and the clinical states of interest, the clinician constructs the labelled dataset $L = \{(\{y_t^i\}_{t \in T_k^i}, l_k^i),\; k \in \{1, \dots, K^i\} = \mathcal{K}^i\}$.
Step#4: For a new patient $i = 0$ with vital signs $\{y_t^0\}_{t \in T^0}$, construct the stochastic model of the patient using the Bayesian non-parametric learning algorithm. Then, based on the stochastic model, construct the segmented vital sign data $\{\{y_t^0\}_{t \in T_k^0},\; k \in \{1, \dots, K^0\} = \mathcal{K}^0\}$.
Step#5: To estimate the label $l(k)$, written $\hat{l}(k)$, of each segment $k \in \mathcal{K}^0$ from Step#4, compute the solution to the following optimization problem for each $k$:

$$\text{if } \min_{k' \in L}\{D_B(k, k')\} \ge \phi_{th} \text{ then } \hat{l}(k) = \varnothing, \text{ else } \hat{l}(k) \in \operatorname*{argmin}_{l \in L}\left\{\frac{\min_{k' \in L_l}\{D_B(k, k')\}}{\min_{k' \in L_{-l}}\{D_B(k, k')\}}\right\}$$

with $\varnothing$ the anomalous state, $L_l \subset L$ the set of segments that are labeled with $l$, $L_{-l} \subset L$ the set of all segments that are not labeled as $l$, and $\phi_{th}$ a threshold. Return to Step#4.
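A hedged sketch of Step#5 using the closed-form Bhattacharyya distance between Gaussian dynamic models; it assumes at least two distinct labels in $L$ and a user-chosen threshold $\phi_{th}$.

```python
import numpy as np

def bhattacharyya(mu1, S1, mu2, S2):
    """Closed-form Bhattacharyya distance between two Gaussians."""
    S = 0.5 * (S1 + S2)
    d = mu1 - mu2
    return (0.125 * d @ np.linalg.solve(S, d)
            + 0.5 * np.log(np.linalg.det(S)
                           / np.sqrt(np.linalg.det(S1) * np.linalg.det(S2))))

def label_segment(seg, labeled, phi_th):
    """seg: (mu, Sigma); labeled: list of (mu, Sigma, label) from L."""
    dists = [(bhattacharyya(seg[0], seg[1], mu, S), l) for mu, S, l in labeled]
    if min(d for d, _ in dists) >= phi_th:
        return None                         # anomalous dynamic model
    labels = {l for _, l in dists}          # assumes >= 2 distinct labels
    score = {l: min(d for d, ll in dists if ll == l)
             / min(d for d, ll in dists if ll != l) for l in labels}
    return min(score, key=score.get)
```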
The first step of Algorithm 1 is to segment the EHR data based on the estimated stochastic models of the patients. Fig.2(a) illustrates the dynamic models of a specific patient's estimated stochastic model for $\gamma = 0.1$ and $S_0 = 0.1 I_m$ ($I_m$ is the identity matrix), and for $\gamma = 1$ and $S_0 = I_m$. As seen, for $\gamma = 0.1$ and $S_0 = 0.1 I_m$ several segments have insufficient samples for estimating the model parameters, and are not statistically unique. However, the segments resulting from $\gamma = 1$ and $S_0 = I_m$ provide a stochastic model of sufficient quality, where each segment contains sufficient samples to accurately estimate the model parameters, the segments are statistically unique, and satisfy the multivariate normality assumption. Therefore we set $\gamma = 1$ and $S_0 = I_m$ to construct the segmented dataset $\hat{D}$ from $D$. The dataset $L$ is constructed by providing the clinician with $\hat{D}$, who then labels each segment as either in the ICU admission clinical state, or the non-ICU clinical state.
[Figure 2: Dynamic model discovery and performance of Algorithm 1. (a) Dynamic model estimates with $\{\gamma, S_0\} = \{0.1, 0.1 I_m\}$ (dotted) and $\{1, I_m\}$ (solid), over time [hours]. (b) Estimated dynamic models for the intervals of patient data in Fig.2(d), with the ICU admission marked. (c) Trade-off between the TPR and PPV; the dashed cross-hair indicates the performance of Algorithm 1 for $\phi_b = 1$. (d) Physiological signals (heart-rate, systolic and diastolic blood pressure) from the patient with the discovered models in Fig.2(b).]
Of critical importance in medical applications is the accuracy and timeliness of the detection of the clinical state of the patient. Fig.2(c) provides the trade-off between the TPR and PPV for Algorithm 1, the Rothman index [18], a state-of-the-art method utilized in many hospitals today, and MEWS [21], each depending on the threshold selected. As seen, Algorithm 1 has a superior performance compared to these two popular risk scoring methods. For example, if we require TPR = 71.9%, then the associated PPV values for the Rothman index and MEWS are 26.1% and 18.0% respectively; the PPV of Algorithm 1 is 11.3% higher than that of the Rothman index, and 19.4% higher than that of MEWS. We also compare with methods commonly used in medicine, with the results presented in Table 1. As seen, Algorithm 1 outperforms all these methods for estimating the patient's clinical state. There are several possible reasons that Algorithm 1 outperforms these methods, including accounting for therapeutic interventions and utilizing fine-grained personalization. Note that the results in Table 1 are computed 12 hours prior to ICU admission or hospital discharge. Additionally, the average detection time of ICU admission or discharge using Algorithm 1 is approximately 24 hours prior to the clinician's decision. This timeliness ensures that the patient's clinical state estimate provides clinicians with sufficient warning to apply a therapeutic intervention to stabilize the patient.
Table 1: Accuracy of Methods for Predicting ICU Admission

Algorithm              TPR(%)   PPV(%)
Algorithm 1            71.9     37.4
Rothman Index          53.9     34.5
MEWS                   28.1     26.3
Logistic Regression    55.7     30.7
Lasso Regularization   55.8     30.3
Random Forest          44.5     31.1
SVMs                   32.2     29.9
A key feature of Algorithm 1 is that it learns the number of unique dynamic models for each patient,
and as more data is collected the number of unique dynamic models discovered may increase. Fig.2(b)
illustrates this process for a patient with associated physiological signals given in Fig.2(d). The
horizontal dashed line indicates the intervals and associated discovered dynamic models. Note that
typical hospitalization time for cancer ward patients in the dataset range from 4 hours to over 85
days. As seen, as more samples are obtained for the patient the number of dynamic models that
describe the patient's dynamics increases. Additionally, there is good agreement between where the
patient's dynamics change for the different time intervals. For example, the change point at 40 hours
after hospitalization occurs as a result of an increase in the systolic and diastolic blood pressure, and
a decrease in the heart-rate. At 1700 hours the change in state results from a dramatic increase in
both the systolic and diastolic blood pressure, and a decrease in the heart-rate. From Fig.2(d) these
physiological signals were not observed previously, therefore Algorithm 1 correctly detects that this
is a new unique state for the patient. Though Algorithm 1 can identify changes in patient state, the
domain-knowledge from the clinician is required to define the clinical state of the patient. Only
dynamic models 8 and 9 are associated with the ICU admission state.
Further results are provided in the Supporting Material that illustrate how current methods for
constructing risk scores suffer from the bias introduced from therapeutic intervention censoring, and
how a binary threshold $\phi_b$ can be introduced into Algorithm 1 for controlling the TPR and PPV for
clinical state estimation.
6 Conclusion
In this paper a novel non-parametric learning algorithm for confidently learning stochastic models of patients and classifying their associated clinical state was presented. Compared to state-of-the-art clinical state estimation methods, our algorithm eliminates the bias caused by therapeutic intervention censoring, is personalized to the patient's specific dynamics resulting from medical complications (e.g. disease, drug interactions, physical contusions or fractures), and can detect anomalous clinical states. The algorithm was applied to real-world patient data from a cancer ward in a large academic hospital, and found to have a significant improvement in classifying patients' clinical state in both accuracy and timeliness compared with current state-of-the-art methods such as the Rothman index. The algorithm provides valuable information to allow clinicians to make informed decisions about selecting if a therapeutic intervention is necessary to improve the clinical state of the patients.
Acknowledgments
This research was supported by: NSF ECCS 1462245, and the Airforce DDDAS program.
References
[1] M. Bartlett. Properties of sufficiency and statistical tests. Proc. Roy. Soc. London A, 160:268–282, 1937.
[2] D. Basso, F. Pesarin, L. Salmaso, and A. Solari. Permutation Tests. Springer, 2009.
[3] M. Beal, Z. Ghahramani, and C. Rasmussen. The infinite hidden Markov model. In Advances in Neural Information Processing Systems, pages 577–584, 2001.
[4] M. Bilodeau and D. Brenner. Theory of Multivariate Statistics. Springer, 2008.
[5] P. Brockwell and R. Davis. Time Series: Theory and Methods. Springer Science & Business Media, 2013.
[6] M. Bryant and E. Sudderth. Truly nonparametric online variational inference for hierarchical Dirichlet processes. In Advances in Neural Information Processing Systems, pages 2699–2707, 2012.
[7] T. Campbell, J. Straub, J. Fisher, and J. How. Streaming, distributed variational inference for Bayesian nonparametrics. In Advances in Neural Information Processing Systems, pages 280–288, 2015.
[8] T. Ferguson. A Bayesian analysis of some nonparametric problems. The Annals of Statistics, pages 209–230, 1973.
[9] E. Fox, M. Jordan, E. Sudderth, and A. Willsky. Sharing features among dynamical systems with beta processes. In Advances in Neural Information Processing Systems, pages 549–557, 2009.
[10] E. Fox, E. Sudderth, M. Jordan, and A. Willsky. An HDP-HMM for systems with state persistence. In Proceedings of the 25th International Conference on Machine Learning, pages 312–319. ACM, 2008.
[11] A. Gelman, J. Carlin, H. Stern, and D. Rubin. Bayesian Data Analysis, volume 2. Taylor & Francis, 2014.
[12] A. Johnson, M. Ghassemi, S. Nemati, K. Niehaus, D. Clifton, and G. Clifford. Machine learning and decision support in critical care. Proceedings of the IEEE, 104(2):444–466, 2016.
[13] A. Maurer and M. Pontil. Empirical Bernstein bounds and sample variance penalization. COLT, 2009.
[14] G. Montanez, S. Amizadeh, and N. Laptev. Inertial hidden Markov models: Modeling change in multivariate time series. In AAAI, pages 1819–1825, 2015.
[15] R. Muirhead. Aspects of Multivariate Statistical Theory. Wiley, 1982.
[16] C. Paxton, A. Niculescu-Mizil, and S. Saria. Developing predictive models using electronic medical records: challenges and pitfalls. In Annual Symposium Proceedings/AMIA Symposium, volume 2013, pages 1109–1115. American Medical Informatics Association, 2012.
[17] F. Pesarin and L. Salmaso. Permutation Tests for Complex Data: Theory, Applications and Software. John Wiley & Sons, 2010.
[18] M. Rothman, S. Rothman, and J. Beals. Development and validation of a continuous measure of patient condition using the electronic medical record. Journal of Biomedical Informatics, 46(5):837–848, 2013.
[19] S. Saria, D. Koller, and A. Penn. Learning individual and population level traits from clinical temporal data. In Proc. Neural Information Processing Systems (NIPS), Predictive Models in Personalized Medicine Workshop. Citeseer, 2010.
[20] J. Schott. A test for the equality of covariance matrices when the dimension is large relative to the sample sizes. Computational Statistics & Data Analysis, 51(12):6535–6542, 2007.
[21] P. Subbe, M. Kruger, P. Rutherford, and L. Gemmel. Validation of a modified Early Warning Score in medical admissions. QJM, 94(10):521–526, 2001.
[22] Y. W. Teh, M. Jordan, M. Beal, and D. Blei. Hierarchical Dirichlet processes. Journal of the American Statistical Association, 2012.
[23] C. Tenreiro. An affine invariant multiple test procedure for assessing multivariate normality. Computational Statistics & Data Analysis, 55(5):1980–1992, 2011.
[24] N. Timm. Applied Multivariate Analysis, volume 1. Springer, 2002.
[25] J. Van Gael, Y. Saatci, Y. W. Teh, and Z. Ghahramani. Beam sampling for the infinite hidden Markov model. In Proceedings of the 25th International Conference on Machine Learning, pages 1088–1095. ACM, 2008.
Following the Leader and Fast Rates in Linear
Prediction: Curved Constraint Sets and Other
Regularities
Ruitong Huang
Department of Computing Science
University of Alberta, AB, Canada
ruitong@ualberta.ca
Tor Lattimore
School of Informatics and Computing
Indiana University, IN, USA
tor.lattimore@gmail.com
András György
Dept. of Electrical & Electronic Engineering
Imperial College London, UK
a.gyorgy@imperial.ac.uk
Csaba Szepesvári
Department of Computing Science
University of Alberta, AB, Canada
szepesva@ualberta.ca
Abstract
The follow the leader (FTL) algorithm, perhaps the simplest of all online learning
algorithms, is known to perform well when the loss functions it is used on are positively curved. In this paper we ask whether there are other "lucky" settings when
FTL achieves sublinear, ?small? regret. In particular, we study the fundamental
problem of linear prediction over a non-empty convex, compact domain. Amongst
other results, we prove that the curvature of the boundary of the domain can act as
if the losses were curved: In this case, we prove that as long as the means of the loss
vectors have positive lengths bounded away from zero, FTL enjoys a logarithmic
growth rate of regret, while, e.g., for polyhedral domains and stochastic data it
enjoys finite expected regret. Building on a previously known meta-algorithm, we
also get an algorithm that simultaneously enjoys the worst-case guarantees and the
bound available for FTL.
1
Introduction
Learning theory traditionally has been studied in a statistical framework, discussed at length, for
example, by Shalev-Shwartz and Ben-David [2014]. The issue with this approach is that the analysis
of the performance of learning methods seems to critically depend on whether the data generating
mechanism satisfies some probabilistic assumptions. Realizing that these assumptions are not
necessarily critical, much work has been devoted recently to studying learning algorithms in the socalled online learning framework [Cesa-Bianchi and Lugosi, 2006]. The online learning framework
makes minimal assumptions about the data generating mechanism, while allowing one to replicate
results of the statistical framework through online-to-batch conversions [Cesa-Bianchi et al., 2004].
By following a minimax approach, however, results proven in the online learning setting, at least
initially, led to rather conservative results and algorithm designs, failing to capture how more regular,
?easier? data, may give rise to faster learning speed. This is problematic as it may suggest overly
conservative learning strategies, missing opportunities to extract more information when the data is
nicer. Also, it is hard to argue that data resulting from passive data collection, such as weather data,
would ever be adversarially generated (though it is equally hard to defend that such data satisfies
precise stochastic assumptions). Realizing this issue, during recent years much work has been devoted
to understanding what regularities and how can lead to faster learning speed. For example, much
work has been devoted to showing that faster learning speed (smaller "regret") can be achieved in
the online convex optimization setting when the loss functions are "curved", such as when the loss
functions are strongly convex or exp-concave, or when the losses show small variations, or the best
prediction in hindsight has a small total loss, and that these properties can be exploited in an adaptive
manner (e.g., Merhav and Feder 1992, Freund and Schapire 1997, Gaivoronski and Stella 2000,
Cesa-Bianchi and Lugosi 2006, Hazan et al. 2007, Bartlett et al. 2007, Kakade and Shalev-Shwartz
2009, Orabona et al. 2012, Rakhlin and Sridharan 2013, van Erven et al. 2015, Foster et al. 2015).
In this paper we contribute to this growing literature by studying online linear prediction and the
follow the leader (FTL) algorithm. Online linear prediction is arguably the simplest of all the learning
settings, and lies at the heart of online convex optimization, while it also serves as an abstraction of
core learning problems such as prediction with expert advice. FTL, the online analogue of empirical
risk minimization of statistical learning, is the simplest learning strategy, one can think of. Although
the linear setting of course removes the possibility of exploiting the curvature of losses, as we will
see, there are multiple ways online learning problems can present data that allows for small regret,
even for FTL. As is it well known, in the worst case, FTL suffers a linear regret (e.g., Example 2.2 of
Shalev-Shwartz [2012]). However, for "curved" losses (e.g., exp-concave losses), FTL was shown
to achieve small (logarithmic) regret (see, e.g., Merhav and Feder [1992], Cesa-Bianchi and Lugosi
[2006], Gaivoronski and Stella [2000], Hazan et al. [2007]).
In this paper we take a thorough look at FTL in the case when the losses are linear, but the problem
perhaps exhibits other regularities. The motivation comes from the simple observation that, for
prediction over the simplex, when the loss vectors are selected independently of each other from
a distribution with a bounded support with a nonzero mean, FTL quickly locks onto selecting the
loss-minimizing vertex of the simplex, achieving finite expected regret. In this case, FTL is arguably
an excellent algorithm. In fact, FTL is shown to be the minimax optimizer for the binary losses in the
stochastic expert setting in the paper of Kotłowski [2016]. Thus, we ask the question of whether there
are other regularities that allow FTL to achieve nontrivial performance guarantees. Our main result
shows that when the decision set (or constraint set) has a sufficiently "curved" boundary and the
linear loss is bounded away from 0, FTL is able to achieve logarithmic regret even in the adversarial
setting, thus opening up a new way to prove fast rates based not on the curvature of losses, but on
that of the boundary of the constraint set and non-singularity of the linear loss. In a matching lower
bound we show that this regret bound is essentially unimprovable. We also show an alternate bound
for polyhedral constraint sets, which allows us to prove that (under certain technical conditions) for
stochastic problems the expected regret of FTL will be finite. To finish, we use (A, B)-prod of Sani et al. [2014] to design an algorithm that adaptively interpolates between the worst case $O(\sqrt{n})$ regret and the smaller regret bounds, which we prove here for "easy data." Simulation results on artificial data to illustrate the theory complement the theoretical findings, though due to lack of space these are presented only in the long version of the paper [Huang et al., 2016].
While we believe that we are the first to point out that the curvature of the constraint set W can help
in speeding up learning, this effect is known in convex optimization since at least the work of Levitin
and Polyak [1966], who showed that exponential rates are attainable for strongly convex constraint
sets if the norm of the gradients of the objective function admits a uniform lower bound. More recently, Garber and Hazan [2015] proved an $O(1/n^2)$ optimization error bound (with problem-dependent constants) for the Frank-Wolfe algorithm for strongly convex and smooth objectives and strongly convex constraint sets. The effect of the shape of the constraint set was also discussed by Abbasi-Yadkori [2010], who demonstrated $O(\sqrt{n})$ regret in the linear bandit setting. While these results at a high level are similar to ours, our proof technique is rather different than that used there.
2 Preliminaries, online learning and the follow the leader algorithm
We consider the standard framework of online convex optimization, where a learner and an environment interact in a sequential manner over $n$ rounds: In every round $t = 1, \dots, n$, first the learner predicts $w_t \in \mathcal{W}$. Then the environment picks a loss function $\ell_t \in \mathcal{L}$, and the learner suffers loss $\ell_t(w_t)$ and observes $\ell_t$. Here, $\mathcal{W}$ is a non-empty, compact convex subset of $\mathbb{R}^d$ and $\mathcal{L}$ is a set of convex functions, mapping $\mathcal{W}$ to the reals. The elements of $\mathcal{L}$ are called loss functions. The performance of the learner is measured in terms of its regret,

$$R_n = \sum_{t=1}^{n} \ell_t(w_t) - \min_{w \in \mathcal{W}} \sum_{t=1}^{n} \ell_t(w)\,.$$
The simplest possible case, which will be the focus of this paper, is when the losses are linear, i.e., when $\ell_t(w) = \langle f_t, w \rangle$ for some $f_t \in \mathcal{F} \subset \mathbb{R}^d$. In fact, the linear case is not only simple, but is also fundamental since the case of nonlinear loss functions can be reduced to it: Indeed, even if the losses are nonlinear, defining $f_t \in \partial \ell_t(w_t)$ to be a subgradient$^1$ of $\ell_t$ at $w_t$ and letting $\tilde{\ell}_t(u) = \langle f_t, u \rangle$, by the definition of subgradients, $\ell_t(w_t) - \ell_t(u) \le \ell_t(w_t) - (\ell_t(w_t) + \langle f_t, u - w_t \rangle) = \tilde{\ell}_t(w_t) - \tilde{\ell}_t(u)$, hence for any $u \in \mathcal{W}$,

$$\sum_t \ell_t(w_t) - \sum_t \ell_t(u) \le \sum_t \tilde{\ell}_t(w_t) - \sum_t \tilde{\ell}_t(u)\,.$$

In particular, if an algorithm keeps the regret small no matter how the linear losses are selected (even when allowing the environment to pick losses based on the choices of the learner), the algorithm can also be used to keep the regret small in the nonlinear case. Hence, in what follows we will study the linear case $\ell_t(w) = \langle f_t, w \rangle$ and, in particular, we will study the regret of the so-called "Follow The Leader" (FTL) learner, which, in round $t \ge 2$ picks
$$w_t = \operatorname*{argmin}_{w \in \mathcal{W}} \sum_{i=1}^{t-1} \ell_i(w)\,.$$

For the first round, $w_1 \in \mathcal{W}$ is picked in an arbitrary manner. When $\mathcal{W}$ is compact, the optimum of $\min_{w \in \mathcal{W}} \sum_{i=1}^{t-1} \langle w, f_i \rangle$ is attained, which we will assume henceforth. If multiple minimizers exist, we simply fix one of them as $w_t$. We will also assume that $\mathcal{F}$ is non-empty, compact and convex.
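To make the setting concrete, here is a minimal FTL simulation (illustrative, not from the paper) for linear losses over the probability simplex; with i.i.d. losses of nonzero mean, FTL quickly locks onto the loss-minimizing vertex and the regret stays bounded.

```python
# FTL over the simplex: the leader puts all mass on the coordinate with
# the smallest cumulative loss.
import numpy as np

rng = np.random.default_rng(0)
d, n = 5, 10_000
mean = rng.normal(size=d)                  # nonzero-mean stochastic losses
cum = np.zeros(d)
regret = 0.0
for t in range(n):
    w = np.eye(d)[np.argmin(cum)] if t > 0 else np.ones(d) / d
    f = mean + rng.normal(scale=0.5, size=d)
    regret += f @ w
    cum += f
regret -= cum.min()                        # best fixed vertex in hindsight
print(regret)                              # stays bounded as n grows
```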
2.1 Support functions
Let $\Theta_t = -\frac{1}{t}\sum_{i=1}^{t} f_i$ be the negative average of the first $t$ vectors in $(f_t)_{t=1}^n$, $f_t \in \mathcal{F}$. For convenience, we define $\Theta_0 := 0$. Thus, for $t \ge 2$,

$$w_t = \operatorname*{argmin}_{w \in \mathcal{W}} \sum_{i=1}^{t-1} \langle w, f_i \rangle = \operatorname*{argmin}_{w \in \mathcal{W}} \langle w, -\Theta_{t-1} \rangle = \operatorname*{argmax}_{w \in \mathcal{W}} \langle w, \Theta_{t-1} \rangle\,.$$

Denote by $\Phi(\Theta) = \max_{w \in \mathcal{W}} \langle w, \Theta \rangle$ the so-called support function of $\mathcal{W}$. The support function, being the maximum of linear and hence convex functions, is itself convex. Further, $\Phi$ is positive homogeneous: for $a \ge 0$ and $\Theta \in \mathbb{R}^d$, $\Phi(a\Theta) = a\Phi(\Theta)$. It follows then that the epigraph $\mathrm{epi}(\Phi) = \{(\Theta, z) \mid z \ge \Phi(\Theta),\, z \in \mathbb{R},\, \Theta \in \mathbb{R}^d\}$ of $\Phi$ is a cone, since for any $(\Theta, z) \in \mathrm{epi}(\Phi)$ and $a \ge 0$, $az \ge a\Phi(\Theta) = \Phi(a\Theta)$, so $(a\Theta, az) \in \mathrm{epi}(\Phi)$ also holds.
The differentiability of the support function is closely tied to whether in the FTL algorithm the choice of $w_t$ is uniquely determined:

Proposition 2.1. Let $\mathcal{W} \ne \varnothing$ be convex and closed. Fix $\Theta$ and let $Z := \{w \in \mathcal{W} \mid \langle w, \Theta \rangle = \Phi(\Theta)\}$. Then, $\partial\Phi(\Theta) = Z$ and, in particular, $\Phi$ is differentiable at $\Theta$ if and only if $\max_{w \in \mathcal{W}} \langle w, \Theta \rangle$ has a unique optimizer. In this case, $\nabla\Phi(\Theta) = \operatorname*{argmax}_{w \in \mathcal{W}} \langle w, \Theta \rangle$.

The proposition follows from Danskin's theorem when $\mathcal{W}$ is compact (e.g., Proposition B.25 of Bertsekas 1999), but a simple direct argument can also be used to show that it remains true even when $\mathcal{W}$ is unbounded.$^2$ By Proposition 2.1, when $\Phi$ is differentiable at $\Theta_{t-1}$, $w_t = \nabla\Phi(\Theta_{t-1})$.
3 Non-stochastic analysis of FTL
We start by rewriting the regret of FTL in an equivalent form, which shows that we can expect FTL to enjoy a small regret when successive weight vectors move little. A noteworthy feature of the next proposition is that rather than bounding the regret from above, it gives an equivalent expression for it.

Proposition 3.1. The regret $R_n$ of FTL satisfies

$$R_n = \sum_{t=1}^{n} t\,\langle w_{t+1} - w_t, \Theta_t \rangle\,.$$
$^1$ We let $\partial g(x)$ denote the subdifferential of a convex function $g: \mathrm{dom}(g) \to \mathbb{R}$ at $x$, i.e., $\partial g(x) = \{\theta \in \mathbb{R}^d \mid g(x') \ge g(x) + \langle \theta, x' - x \rangle\ \forall x' \in \mathrm{dom}(g)\}$, where $\mathrm{dom}(g) \subset \mathbb{R}^d$ is the domain of $g$.
$^2$ The proofs not given in the main text can be found in the long version of the paper [Huang et al., 2016].
The result is a direct corollary of Lemma 9 of McMahan [2010], which holds for any sequence of losses, even in the lack of convexity. It is also a tightening of the well-known inequality $R_n \le \sum_{t=1}^{n} \ell_t(w_t) - \ell_t(w_{t+1})$, which again holds for arbitrary loss sequences (e.g., Lemma 2.1 of Shalev-Shwartz [2012]). To keep the paper self-contained, we give an elegant, short direct proof, based on the summation by parts formula:
Proof. The summation by parts formula states that for any reals $u_1, v_1, \dots, u_{n+1}, v_{n+1}$, $\sum_{t=1}^{n} u_t(v_{t+1} - v_t) = (u_{n+1}v_{n+1} - u_1 v_1) - \sum_{t=1}^{n} (u_{t+1} - u_t)v_{t+1}$. Applying this to the definition of regret with $u_t := w_t$ and $v_{t+1} := t\Theta_t$ (products becoming inner products), we get

$$R_n = -\sum_{t=1}^{n} \langle w_t, t\Theta_t - (t-1)\Theta_{t-1} \rangle + \langle w_{n+1}, n\Theta_n \rangle = -\Big\{\langle w_{n+1}, n\Theta_n \rangle - 0 - \sum_{t=1}^{n} \langle w_{t+1} - w_t, t\Theta_t \rangle\Big\} + \langle w_{n+1}, n\Theta_n \rangle = \sum_{t=1}^{n} t\,\langle w_{t+1} - w_t, \Theta_t \rangle\,,$$

where the two $\langle w_{n+1}, n\Theta_n \rangle$ terms cancel.
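The identity is easy to verify numerically; the sketch below does so on the Euclidean unit ball, where $\operatorname*{argmax}_{w \in \mathcal{W}} \langle w, \Theta \rangle = \Theta/\|\Theta\|_2$ is available in closed form (an illustrative check, not part of the paper).

```python
# Check R_n = sum_t t <w_{t+1} - w_t, Theta_t> for FTL on the unit ball.
import numpy as np

rng = np.random.default_rng(2)
d, n = 3, 200
F = rng.normal(size=(n, d))                           # loss vectors f_t
Theta = -np.cumsum(F, axis=0) / np.arange(1, n + 1)[:, None]

preds = [np.ones(d) / np.sqrt(d)]                     # w_1 on the boundary
for t in range(n):
    preds.append(Theta[t] / np.linalg.norm(Theta[t])) # w_{t+1}

loss = sum(F[t] @ preds[t] for t in range(n))         # uses w_1, ..., w_n
regret = loss + n * np.linalg.norm(Theta[-1])         # min over ball
rhs = sum((t + 1) * (preds[t + 1] - preds[t]) @ Theta[t] for t in range(n))
print(regret, rhs)     # the two values agree up to floating-point error
```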
Our next proposition gives another formula that is equal to the regret. As opposed to the previous result, this formula is appealing as it is independent of $w_t$; but it directly connects the sequence $(\Theta_t)_t$ to the geometric properties of $\mathcal{W}$ through the support function $\Phi$. For this proposition we will momentarily assume that $\Phi$ is differentiable at $(\Theta_t)_{t \ge 1}$; a more general statement will follow later.

Proposition 3.2. If $\Phi$ is differentiable at $\Theta_1, \dots, \Theta_n$,

$$R_n = \sum_{t=1}^{n} t\, D_\Phi(\Theta_t, \Theta_{t-1})\,, \qquad (1)$$

where $D_\Phi(\Theta', \Theta) = \Phi(\Theta') - \Phi(\Theta) - \langle \nabla\Phi(\Theta), \Theta' - \Theta \rangle$ is the Bregman divergence of $\Phi$ and we use the convention that $\nabla\Phi(0) = w_1$.

Proof. Let $v = \operatorname*{argmax}_{w \in \mathcal{W}} \langle w, \Theta \rangle$, $v' = \operatorname*{argmax}_{w \in \mathcal{W}} \langle w, \Theta' \rangle$. When $\Phi$ is differentiable at $\Theta$,

$$D_\Phi(\Theta', \Theta) = \Phi(\Theta') - \Phi(\Theta) - \langle \nabla\Phi(\Theta), \Theta' - \Theta \rangle = \langle v', \Theta' \rangle - \langle v, \Theta \rangle - \langle v, \Theta' - \Theta \rangle = \langle v' - v, \Theta' \rangle\,. \qquad (2)$$

Therefore, by Proposition 3.1, $R_n = \sum_{t=1}^{n} t \langle w_{t+1} - w_t, \Theta_t \rangle = \sum_{t=1}^{n} t\, D_\Phi(\Theta_t, \Theta_{t-1})$.
When $\Phi$ is non-differentiable at some of the points $\Theta_1, \ldots, \Theta_n$, the equality in the above proposition can be replaced with inequalities. Defining the upper Bregman divergence $\overline{D}_\Phi(\Theta', \Theta) = \sup_{w \in \partial\Phi(\Theta)} \Phi(\Theta') - \Phi(\Theta) - \langle w, \Theta' - \Theta \rangle$ and the lower Bregman divergence $\underline{D}_\Phi(\Theta', \Theta)$ similarly with $\inf$ instead of $\sup$, analogously to Proposition 3.2 we obtain
$$ \sum_{t=1}^n t\, \underline{D}_\Phi(\Theta_t, \Theta_{t-1}) \ \le\ R_n \ \le\ \sum_{t=1}^n t\, \overline{D}_\Phi(\Theta_t, \Theta_{t-1}) . \qquad (3) $$
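For a concrete instance of Proposition 3.2, one can take $\mathcal{W}$ to be the Euclidean unit ball, whose support function is $\Phi(\Theta) = \|\Theta\|_2$ with $\nabla\Phi(\Theta) = \Theta/\|\Theta\|_2$. The following sketch is hypothetical NumPy code (distribution, sizes, and seed are illustrative assumptions), using the convention $\nabla\Phi(0) = w_1$ for the $t = 1$ term; it verifies that the regret equals $\sum_t t\, D_\Phi(\Theta_t, \Theta_{t-1})$.

```python
import numpy as np

rng = np.random.default_rng(1)
n, d = 300, 4
f = rng.normal(size=(n, d)) + np.array([0.5, 0.0, 0.0, 0.0])

def grad_phi(theta, w1):
    # gradient of Phi(theta) = ||theta||_2; convention grad Phi(0) = w_1
    nrm = np.linalg.norm(theta)
    return theta / nrm if nrm > 0 else w1

w1 = np.eye(d)[0]
theta_prev, cum = np.zeros(d), np.zeros(d)
regret, bregman_sum = 0.0, 0.0
for t in range(1, n + 1):
    w_t = grad_phi(theta_prev, w1)        # FTL play (Proposition 2.1)
    regret += w_t @ f[t - 1]
    cum += f[t - 1]
    theta_t = -cum / t
    bregman_sum += t * (np.linalg.norm(theta_t) - np.linalg.norm(theta_prev)
                        - w_t @ (theta_t - theta_prev))
    theta_prev = theta_t
regret += np.linalg.norm(cum)             # subtract min_w <w, sum f> = -||sum f||
print(abs(regret - bregman_sum))          # ~1e-12
```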
3.1 Constraint sets with positive curvature
The previous results show in an implicit fashion that the curvature of $\mathcal{W}$ controls the regret. We now present our first main result that makes this connection explicit. Denote the boundary of $\mathcal{W}$ by $\mathrm{bd}(\mathcal{W})$. For this result, we shall assume that $\mathcal{W}$ is $C^2$, that is, $\mathrm{bd}(\mathcal{W})$ is a twice continuously differentiable submanifold of $\mathbb{R}^d$. Recall that in this case the principal curvatures of $\mathcal{W}$ at $w \in \mathrm{bd}(\mathcal{W})$ are the eigenvalues of $\nabla u_{\mathcal{W}}(w)$, where $u_{\mathcal{W}} : \mathrm{bd}(\mathcal{W}) \to S^{d-1}$, the so-called Gauss map, maps a boundary point $w \in \mathrm{bd}(\mathcal{W})$ to the unique outer normal vector to $\mathcal{W}$ at $w$.³ As is well known, $\nabla u_{\mathcal{W}}(w)$ is a self-adjoint operator with nonnegative eigenvalues, thus the principal curvatures are nonnegative. A perhaps more intuitive, yet equivalent, definition is that the principal curvatures are the eigenvalues of the Hessian of $f = f_w$ in the parameterization $t \mapsto w + t - f_w(t)\, u_{\mathcal{W}}(w)$ of $\mathrm{bd}(\mathcal{W})$, which is valid in a small open neighborhood of $w$; here $f_w : T_w\mathcal{W} \to [0, \infty)$ is a suitable convex, nonnegative valued function that also satisfies $f_w(0) = 0$, and $T_w\mathcal{W}$, a hyperplane of $\mathbb{R}^d$, denotes the tangent space of $\mathcal{W}$ at $w$, obtained by taking the support plane $H$ of $\mathcal{W}$ at $w$ and shifting it by $-w$. Thus, the principal curvatures at some point $w \in \mathrm{bd}(\mathcal{W})$ describe the local shape of $\mathrm{bd}(\mathcal{W})$ up to the second order.
A related concept that has been used in convex optimization to show fast rates is that of a strongly convex constraint set [Levitin and Polyak, 1966, Garber and Hazan, 2015]: $\mathcal{W}$ is $\lambda$-strongly convex with respect to the norm $\|\cdot\|$ if, for any $x, y \in \mathcal{W}$ and $\gamma \in [0, 1]$, the $\|\cdot\|$-ball with origin $\gamma x + (1-\gamma) y$ and radius $\gamma(1-\gamma)\lambda \|x - y\|^2 / 2$ is included in $\mathcal{W}$. One can show that a closed convex set $\mathcal{W}$ is $\lambda$-strongly convex with respect to $\|\cdot\|_2$ if and only if the principal curvatures of the surface $\mathrm{bd}(\mathcal{W})$ are all at least $\lambda$.
³ $S^{d-1} = \{x \in \mathbb{R}^d \mid \|x\|_2 = 1\}$ denotes the unit sphere in $d$ dimensions. All differential geometry concepts and results that we need can be found in Section 2.5 of [Schneider, 2014].
Our next result connects the principal curvatures of $\mathrm{bd}(\mathcal{W})$ to the regret of FTL and shows that FTL enjoys logarithmic regret for highly curved surfaces, as long as $\|\Theta_t\|_2$ is bounded away from zero.
Theorem 3.3. Let $\mathcal{W} \subset \mathbb{R}^d$ be a $C^2$ convex body⁴ with $d \ge 2$. Let $M = \max_{f \in \mathcal{F}} \|f\|_2$ and assume that $\Phi$ is differentiable at $(\Theta_t)_t$. Assume that the principal curvatures of the surface $\mathrm{bd}(\mathcal{W})$ are all at least $\lambda_0$ for some constant $\lambda_0 > 0$ and $L_n := \min_{1 \le t \le n} \|\Theta_t\|_2 > 0$. Choose $w_1 \in \mathrm{bd}(\mathcal{W})$. Then
$$ R_n \le \frac{2M^2}{\lambda_0 L_n} \left( 1 + \log(n) \right) . $$
As we will show later in an essentially matching lower bound, this bound is tight, showing that the forte of FTL is when $L_n$ is bounded away from zero and $\lambda_0$ is large. Note that the bound is vacuous as soon as $L_n = O(\log(n)/n)$ and is worse than the minimax bound of $O(\sqrt{n})$ when $L_n = o(\log(n)/\sqrt{n})$. One possibility to reduce the bound's sensitivity to $L_n$ is to use the trivial bound $\langle w_{t+1} - w_t, \Theta_t \rangle \le L W$, where $W = \sup_{w, w' \in \mathcal{W}} \|w - w'\|_2$, for indices $t$ when $\|\Theta_t\|_2 \le L$. Then, by optimizing the bound over $L$, one gets a data-dependent bound of the form
$$ \inf_{L > 0}\ \frac{2M^2}{\lambda_0 L} \left( 1 + \log(n) \right) + L W \sum_{t=1}^n \mathbb{I}(\|\Theta_t\|_2 \le L), $$
which is more complex, but is free of $L_n$ and thus reflects the nature of FTL better. Note that in the case of stochastic problems, where $f_1, \ldots, f_n$ are independent and identically distributed (i.i.d.) with $\mu := -\mathbb{E}[\Theta_t] \neq 0$, the probability that $\|\Theta_t\|_2 < \|\mu\|_2 / 2$ is exponentially small in $t$. Thus, selecting $L = \|\mu\|_2 / 2$ in the previous bound, the contribution of the expectation of the second term is $O(\|\mu\|_2 W)$, giving an overall bound of the form $O(\frac{M^2}{\lambda_0 \|\mu\|_2} \log(n) + \|\mu\|_2 W)$. After the proof we will provide some simple examples that should make it more intuitive how the curvature of $\mathcal{W}$ helps keeping the regret of FTL small.

[Figure 1: Illustration of the construction used in the proof of (4), showing the plane $P$, the boundary curve $\gamma(s)$, the points $w(1)$ and $w(2)$, and the normalized vectors $\hat\Theta_1$, $\hat\Theta_2$.]
Proof. Fix $\Theta_1, \Theta_2 \in \mathbb{R}^d$ and let $w(1) = \operatorname{argmax}_{w \in \mathcal{W}} \langle w, \Theta_1 \rangle$, $w(2) = \operatorname{argmax}_{w \in \mathcal{W}} \langle w, \Theta_2 \rangle$. Note that if $\Theta_1, \Theta_2 \neq 0$ then $w(1), w(2) \in \mathrm{bd}(\mathcal{W})$. Below we will show that
$$ \langle w(1) - w(2), \Theta_1 \rangle \le \frac{1}{2\lambda_0} \frac{\|\Theta_2 - \Theta_1\|_2^2}{\|\Theta_2\|_2} . \qquad (4) $$
Proposition 3.1 suggests that it suffices to bound $\langle w_{t+1} - w_t, \Theta_t \rangle$. By (4), we see that it suffices to bound how much $\Theta_t$ moves. A straightforward calculation shows that $\Theta_t$ cannot move much:
Lemma 3.4. For any norm $\|\cdot\|$ on $\mathcal{F}$, we have $\|\Theta_t - \Theta_{t-1}\| \le \frac{2}{t} M$, where $M = \max_{f \in \mathcal{F}} \|f\|$ is a constant that depends on $\mathcal{F}$ and the norm $\|\cdot\|$.
Combining inequality (4) with Proposition 3.1 and Lemma 3.4, we get
$$ R_n = \sum_{t=1}^n t\, \langle w_{t+1} - w_t, \Theta_t \rangle \le \sum_{t=1}^n \frac{t\, \|\Theta_t - \Theta_{t-1}\|_2^2}{2\lambda_0 \|\Theta_{t-1}\|_2} \le \frac{2M^2}{\lambda_0} \sum_{t=1}^n \frac{1}{t\, \|\Theta_{t-1}\|_2} \le \frac{2M^2}{\lambda_0 L_n} \sum_{t=1}^n \frac{1}{t} \le \frac{2M^2}{\lambda_0 L_n} \left( 1 + \log(n) \right) . $$
To finish the proof, it thus remains to show (4).
The following elementary lemma relates the cosine of the angle between two vectors $\Theta_1$ and $\Theta_2$ to the squared normalized distance between the two vectors, thereby reducing our problem to bounding the cosine of this angle. For brevity, we denote by $\cos(\Theta_1, \Theta_2)$ the cosine of the angle between $\Theta_1$ and $\Theta_2$.
⁴ Following Schneider [2014], a convex body of $\mathbb{R}^d$ is any non-empty, compact, convex subset of $\mathbb{R}^d$.
Lemma 3.5. For any non-zero vectors $\Theta_1, \Theta_2 \in \mathbb{R}^d$,
$$ 1 - \cos(\Theta_1, \Theta_2) \le \frac{1}{2} \frac{\|\Theta_1 - \Theta_2\|_2^2}{\|\Theta_1\|_2 \|\Theta_2\|_2} . \qquad (5) $$
With this result, we see that it suffices to upper bound $\cos(\Theta_1, \Theta_2)$ by $1 - \lambda_0 \langle w(1) - w(2), \frac{\Theta_1}{\|\Theta_1\|_2} \rangle$. To develop this bound, let $\hat\Theta_i = \frac{\Theta_i}{\|\Theta_i\|_2}$ for $i = 1, 2$. The angle between $\Theta_1$ and $\Theta_2$ is the same as the angle between the normalized vectors $\hat\Theta_1$ and $\hat\Theta_2$. To calculate the cosine of the angle between $\hat\Theta_1$ and $\hat\Theta_2$, let $P$ be a plane spanned by $\hat\Theta_1$ and $w(1) - w(2)$ and passing through $w(1)$ ($P$ is uniquely determined if $\hat\Theta_1$ is not parallel to $w(1) - w(2)$; if there are multiple planes, just pick any of them). Further, let $\hat\Theta_2' \in S^{d-1}$ be the unit vector along the projection of $\hat\Theta_2$ onto the plane $P$, as indicated in Fig. 1. Clearly, $\cos(\hat\Theta_1, \hat\Theta_2) \le \cos(\hat\Theta_1, \hat\Theta_2')$.
Consider a curve $\gamma(s)$ on $\mathrm{bd}(\mathcal{W})$ connecting $w(1)$ and $w(2)$ that is defined by the intersection of $\mathrm{bd}(\mathcal{W})$ and $P$ and is parametrized by its curve length $s$, so that $\gamma(0) = w(1)$ and $\gamma(l) = w(2)$, where $l$ is the length of the curve $\gamma$ between $w(1)$ and $w(2)$. Let $u_{\mathcal{W}}(w)$ denote the outer normal vector to $\mathcal{W}$ at $w$ as before, and let $u_\gamma : [0, l] \to S^{d-1}$ be such that $u_\gamma(s) = \hat\nu$, where $\hat\nu$ is the unit vector parallel to the projection of $u_{\mathcal{W}}(\gamma(s))$ on the plane $P$. By definition, $u_\gamma(0) = \hat\Theta_1$ and $u_\gamma(l) = \hat\Theta_2'$. Note that in fact $\gamma$ exists in two versions, since $\mathcal{W}$ is a compact convex body and hence the intersection of $P$ and $\mathrm{bd}(\mathcal{W})$ is a closed curve. Of these two versions we choose the one that satisfies $\langle \gamma'(s), \hat\Theta_1 \rangle \le 0$ for $s \in [0, l]$.⁵ Given the above, we have
$$ \cos(\hat\Theta_1, \hat\Theta_2') = \langle \hat\Theta_2', \hat\Theta_1 \rangle = 1 + \langle \hat\Theta_2' - \hat\Theta_1, \hat\Theta_1 \rangle = 1 + \Big\langle \int_0^l u_\gamma'(s)\, ds,\ \hat\Theta_1 \Big\rangle = 1 + \int_0^l \langle u_\gamma'(s), \hat\Theta_1 \rangle\, ds . \qquad (6) $$
Note that $\gamma$ is a planar curve on $\mathrm{bd}(\mathcal{W})$, thus its curvature $\kappa(s)$ satisfies $\kappa(s) \ge \lambda_0$ for $s \in [0, l]$. Also, for any $w$ on the curve $\gamma$, $\gamma'(s)$ is a unit vector parallel to $P$. Moreover, $u_\gamma'(s)$ is parallel to $\gamma'(s)$ and $\kappa(s) = \|u_\gamma'(s)\|_2$. Therefore,
$$ \langle u_\gamma'(s), \hat\Theta_1 \rangle = \|u_\gamma'(s)\|_2\, \langle \gamma'(s), \hat\Theta_1 \rangle \le \lambda_0\, \langle \gamma'(s), \hat\Theta_1 \rangle, $$
where the last inequality holds because $\langle \gamma'(s), \hat\Theta_1 \rangle \le 0$. Plugging this into (6), we get the desired
$$ \cos(\Theta_1, \Theta_2) \le 1 + \lambda_0 \int_0^l \langle \gamma'(s), \hat\Theta_1 \rangle\, ds = 1 + \lambda_0 \Big\langle \int_0^l \gamma'(s)\, ds,\ \hat\Theta_1 \Big\rangle = 1 - \lambda_0 \langle w(1) - w(2), \hat\Theta_1 \rangle . $$
Reordering and combining with (5), we obtain
$$ \langle w(1) - w(2), \hat\Theta_1 \rangle \le \frac{1}{\lambda_0} \big( 1 - \cos(\hat\Theta_1, \hat\Theta_2') \big) \le \frac{1}{\lambda_0} \big( 1 - \cos(\Theta_1, \Theta_2) \big) \le \frac{1}{2\lambda_0} \frac{\|\Theta_1 - \Theta_2\|_2^2}{\|\Theta_1\|_2 \|\Theta_2\|_2} . $$
Multiplying both sides by $\|\Theta_1\|_2$ gives (4), thus finishing the proof.
Example 3.6. The smallest principal curvatures of some common convex bodies are as follows:
- The smallest principal curvature $\lambda_0$ of the Euclidean ball $\mathcal{W} = \{w \mid \|w\|_2 \le r\}$ of radius $r$ satisfies $\lambda_0 = \frac{1}{r}$.
- Let $Q$ be a positive definite matrix. If $\mathcal{W} = \{w \mid w^\top Q w \le 1\}$ then $\lambda_0 = \lambda_{\min} / \sqrt{\lambda_{\max}}$, where $\lambda_{\min}$ and $\lambda_{\max}$ are the minimal, respectively maximal, eigenvalues of $Q$.
- In general, let $\Psi : \mathbb{R}^d \to \mathbb{R}$ be a $C^2$ convex function. Then, for $\mathcal{W} = \{w \mid \Psi(w) \le 1\}$, $\lambda_0 = \min_{w \in \mathrm{bd}(\mathcal{W})}\ \min_{v :\, \|v\|_2 = 1,\ v \perp \nabla\Psi(w)} \frac{v^\top \nabla^2 \Psi(w)\, v}{\|\nabla\Psi(w)\|_2}$.
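The general formula in the last item can be checked against the ellipsoid case numerically. The sketch below is hypothetical NumPy code with an illustrative $Q$ (an assumption for the example); it evaluates $v^\top \nabla^2 \Psi(w)\, v / \|\nabla\Psi(w)\|_2$ over boundary points of a two-dimensional ellipse and compares the minimum with $\lambda_{\min}/\sqrt{\lambda_{\max}}$.

```python
import numpy as np

Q = np.diag([4.0, 1.0])                      # hypothetical positive definite Q
evals = np.linalg.eigvalsh(Q)
claimed = evals.min() / np.sqrt(evals.max())  # lambda_min / sqrt(lambda_max)

# General formula with Psi(w) = w^T Q w: grad = 2 Q w, Hessian = 2 Q.
ratios = []
for phi in np.linspace(0, 2 * np.pi, 2000, endpoint=False):
    u = np.array([np.cos(phi), np.sin(phi)])
    w = u / np.sqrt(u @ Q @ u)               # boundary point: w^T Q w = 1
    g = 2 * Q @ w                            # gradient of Psi at w
    v = np.array([-g[1], g[0]]) / np.linalg.norm(g)  # unit tangent, v ⟂ grad
    ratios.append((v @ (2 * Q) @ v) / np.linalg.norm(g))
print(min(ratios), claimed)                  # both ≈ 0.5 for this Q
```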
?
In the stochastic i.i.d. case, when E [?t ] = ??, we have k?t + ?k2 = O(1/ t) with high probability.
Thus say, for W being the unit
ball of Rd , one has wt = ?t / k?t k2 ; therefore, a crude bound suggests
?
?
that kwt ? w? k2 = O(1/ t), overall predicting that E [Rn ] = O( n), while the previous result
predicts that Rn is much smaller. In the next example we look at the unit ball, to explain geometrically,
what ?causes? the smaller regret.
5 0
? and u0? denote the derivatives of ? and u, respectively, which exist since W is C 2 .
6
Example 3.7. Let $\mathcal{W} = \{w \mid \|w\|_2 \le 1\}$ and consider a stochastic setting where the $f_i$ are i.i.d. samples from some underlying distribution with expectation $\mathbb{E}[f_i] = \mu = (-1, 0, \ldots, 0)$ and $\|f_i\|_\infty \le M$. It is straightforward to see that $w^* = (1, 0, \ldots, 0)$, and thus $\langle w^*, \mu \rangle = -1$. Let $E_\epsilon = \{-\Theta \mid \|\Theta - \mu\|_2 \le \epsilon\}$. As suggested beforehand, we expect $-\Theta_t \in E_\epsilon$ with high probability. As shown in Fig. 2, the excess loss of an estimate $\vec{OA}$ is $\langle \vec{OA}, \vec{OD} \rangle - 1 = |BD|$. Similarly, the excess loss of an estimate $\vec{OA'}$ in the figure is $|CD|$. Therefore, for an estimate $-\Theta_t \in E_\epsilon$, the point $A$ is where the largest excess loss is incurred. The triangle $OAD$ is similar to the triangle $ADB$. Thus $\frac{|BD|}{|AD|} = \frac{|AD|}{|OD|}$. Therefore, $|BD| = \epsilon^2$, and since $|B'D| \le |BD|$, if $\|\Theta_t - \mu\|_2 \le \epsilon$, the excess error is at most $\epsilon^2 = O(1/t)$, making the regret $R_n = O(\log n)$.
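A quick simulation is consistent with this picture. The sketch below is hypothetical NumPy code (the loss distribution, dimension, and horizon are illustrative assumptions): FTL on the unit ball with i.i.d. losses of nonzero mean has a regret that grows roughly logarithmically.

```python
import numpy as np

rng = np.random.default_rng(2)
n, d = 100_000, 5
f = rng.uniform(-1, 1, size=(n, d))
f[:, 0] -= 1.0                                    # E[f] = (-1, 0, ..., 0)

cum = np.zeros(d)
w = np.eye(d)[0]                                  # w_1 on the boundary
regret_path = np.empty(n)
loss = 0.0
for t in range(n):
    loss += w @ f[t]
    cum += f[t]
    w = -cum / np.linalg.norm(cum)                # w_{t+1} = argmax <w, Theta_t>
    regret_path[t] = loss + np.linalg.norm(cum)   # regret vs. best fixed w
for t in (100, 10_000, n):
    print(t, regret_path[t - 1] / np.log(t))      # roughly constant ratio
```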
Our last result in this section is an asymptotic lower bound for the linear game, showing that FTL achieves the optimal rate under the condition that $\min_t \|\Theta_t\|_2 \ge L > 0$.
Theorem 3.8. Let $h, L \in (0, 1)$. Assume that $\{(1, -L), (-1, -L)\} \subset \mathcal{F}$ and let $\mathcal{W} = \{(x, y) : x^2 + y^2/h^2 \le 1\}$ be an ellipsoid with principal curvature $h$. Then, for any learning strategy, there exists a sequence of losses in $\mathcal{F}$ such that $R_n = \Omega(\log(n)/(Lh))$ and $\|\Theta_t\|_2 \ge L$ for all $t$.

[Figure 2: Illustration of how curvature helps to keep the regret small, showing the points $O$, $A = -\Theta_t$, $A^* = w_t$, $B$, $B'$, $C$, $D = w^* = -\mu$, and $A'$.]

3.2 Other regularities

So far we have looked at the case when FTL achieves a low regret due to the curvature of $\mathrm{bd}(\mathcal{W})$. The next result characterizes the regret of FTL when $\mathcal{W}$ is a polyhedron, which has a flat, non-smooth boundary, and thus Theorem 3.3 is not applicable. For this statement, recall that given some norm $\|\cdot\|$, its dual norm is defined by $\|w\|_* = \sup_{\|v\| \le 1} \langle v, w \rangle$.
Theorem 3.9. Assume that $\mathcal{W}$ is a polyhedron and that $\Phi$ is differentiable at $\Theta_i$, $i = 1, \ldots, n$. Let $w_t = \operatorname{argmax}_{w \in \mathcal{W}} \langle w, \Theta_{t-1} \rangle$, $W = \sup_{w_1, w_2 \in \mathcal{W}} \|w_1 - w_2\|_*$ and $F = \sup_{f_1, f_2 \in \mathcal{F}} \|f_1 - f_2\|$. Then the regret of FTL satisfies
$$ R_n \le W \sum_{t=1}^n t\, \mathbb{I}(w_{t+1} \neq w_t)\, \|\Theta_t - \Theta_{t-1}\| \le F W \sum_{t=1}^n \mathbb{I}(w_{t+1} \neq w_t) . $$
Note that when $\mathcal{W}$ is a polyhedron, $w_t$ is expected to "snap" to some vertex of $\mathcal{W}$. Hence, we expect the regret bound to be non-vacuous if, e.g., $\Theta_t$ "stabilizes" around some value. Some examples after the proof will illustrate this.
Proof. Let $v = \operatorname{argmax}_{w \in \mathcal{W}} \langle w, \Theta \rangle$, $v' = \operatorname{argmax}_{w \in \mathcal{W}} \langle w, \Theta' \rangle$. Similarly to the proof of Theorem 3.3,
$$ \langle v' - v, \Theta' \rangle = \langle v', \Theta' \rangle - \langle v', \Theta \rangle + \langle v', \Theta \rangle - \langle v, \Theta \rangle + \langle v, \Theta \rangle - \langle v, \Theta' \rangle \le \langle v', \Theta' \rangle - \langle v', \Theta \rangle + \langle v, \Theta \rangle - \langle v, \Theta' \rangle = \langle v' - v, \Theta' - \Theta \rangle \le W\, \mathbb{I}(v' \neq v)\, \|\Theta' - \Theta\|, $$
where the first inequality holds because $\langle v', \Theta \rangle \le \langle v, \Theta \rangle$. Therefore, by Lemma 3.4,
$$ R_n = \sum_{t=1}^n t\, \langle w_{t+1} - w_t, \Theta_t \rangle \le W \sum_{t=1}^n t\, \mathbb{I}(w_{t+1} \neq w_t)\, \|\Theta_t - \Theta_{t-1}\| \le F W \sum_{t=1}^n \mathbb{I}(w_{t+1} \neq w_t) . $$
As noted before, since $\mathcal{W}$ is a polyhedron, $w_t$ is (generally) attained at the vertices. In this case, the epigraph of $\Phi$ is a polyhedral cone. Then the event $w_{t+1} \neq w_t$, i.e., when the "leader" switches, corresponds to $\Theta_t$ and $\Theta_{t-1}$ belonging to different linear regions corresponding to different linear pieces of the graph of $\Phi$.
We now spell out a corollary for the stochastic setting. In particular, in this case FTL will often enjoy a constant regret:
Corollary 3.10 (Stochastic setting). Assume that $(f_t)_{1 \le t \le n}$ is an i.i.d. sequence of random variables such that $\mathbb{E}[f_i] = \mu$ and $\|f_i\|_\infty \le M$. Let $W = \sup_{w_1, w_2 \in \mathcal{W}} \|w_1 - w_2\|_1$. Further assume that there exists a constant $r > 0$ such that $\Phi$ is differentiable for any $\Theta$ such that $\|\Theta - \mu\|_\infty \le r$. Then,
$$ \mathbb{E}[R_n] \le 2 M W \left( 1 + 4 d M^2 / r^2 \right) . $$
Proof. Let $V = \{\Theta \mid \|\Theta - \mu\|_\infty \le r\}$. Note that the epigraph of the function $\Phi$ is a polyhedral cone. Since $\Phi$ is differentiable in $V$, $\{(\Theta, \Phi(\Theta)) \mid \Theta \in V\}$ is a subset of a linear subspace. Therefore, for $-\Theta_t, -\Theta_{t-1} \in V$, $w_{t+1} = w_t$. Hence, by Theorem 3.9,
$$ \mathbb{E}[R_n] \le 2 M W \sum_{t=1}^n \Pr(-\Theta_t \notin V \text{ or } -\Theta_{t-1} \notin V) \le 4 M W \Big( 1 + \sum_{t=1}^n \Pr(-\Theta_t \notin V) \Big) . $$
On the other hand, note that $\|f_i\|_\infty \le M$. Then
$$ \Pr(-\Theta_t \notin V) = \Pr\Big( \Big\| \frac{1}{t} \sum_{i=1}^t f_i - \mu \Big\|_\infty \ge r \Big) \le \sum_{j=1}^d \Pr\Big( \Big| \frac{1}{t} \sum_{i=1}^t f_{i,j} - \mu_j \Big| \ge r \Big) \le 2 d\, e^{-\frac{t r^2}{2 M^2}}, $$
where the last inequality is due to Hoeffding's inequality. Now, using that for $\beta > 0$, $\sum_{t=1}^n \exp(-\beta t) \le \int_0^\infty \exp(-\beta t)\, dt \le \frac{1}{\beta}$, we get $\mathbb{E}[R_n] \le 2 M W (1 + 4 d M^2 / r^2)$.
The condition that $\Phi$ is differentiable for any $\Theta$ such that $\|\Theta - \mu\|_\infty \le r$ is equivalent to $\Phi$ being differentiable at $\mu$. By Proposition 2.1, this condition requires that, at $\mu$, $\max_{w \in \mathcal{W}} \langle w, \Theta \rangle$ has a unique optimizer. Note that the volume of the set of vectors $\Theta$ with multiple optimizers is zero.
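The behavior described by Corollary 3.10 is easy to reproduce. In the following sketch (hypothetical NumPy code; the choice of simplex, loss distribution, and horizon are illustrative assumptions), FTL over the probability simplex, a polyhedron, snaps to a vertex after finitely many leader switches, and the regret stops growing.

```python
import numpy as np

rng = np.random.default_rng(3)
n, d = 50_000, 4
mu = np.array([0.2, 0.5, 0.6, 0.7])
f = rng.uniform(0, 1, size=(n, d)) + mu            # E[f_t] = mu + 0.5

cum = np.zeros(d)
leader = 0                                          # w_1 = e_1 (a vertex)
regret, switches = 0.0, 0
for t in range(n):
    regret += f[t, leader]
    cum += f[t]
    new_leader = int(np.argmin(cum))                # FTL on the simplex
    switches += leader != new_leader
    leader = new_leader
regret -= cum.min()                                 # best fixed vertex in hindsight
print(regret, switches)                             # both stay O(1) as n grows
```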
4 An adaptive algorithm for the linear game

While, as shown in Theorem 3.3, FTL can exploit the curvature of the surface of the constraint set to achieve $O(\log n)$ regret, it requires the curvature condition and $\min_t \|\Theta_t\|_2 \ge L$ being bounded away from zero, or it may suffer even linear regret. On the other hand, many algorithms, such as the "Follow the regularized leader" (FTRL) algorithm, are known to achieve a regret guarantee of $O(\sqrt{n})$ even for worst-case data in the linear setting. This raises the question whether one can have an algorithm that achieves constant or $O(\log n)$ regret in the respective settings of Corollary 3.10 or Theorem 3.3, while still maintaining $O(\sqrt{n})$ regret for worst-case data. One way to design an adaptive algorithm is to use the (A, B)-prod algorithm of Sani et al. [2014], leading to the following result:
Proposition 4.1. Consider (A, B)-prod of Sani et al. [2014], where algorithm A is chosen to be FTRL with an appropriate regularization term, while B is chosen to be FTL. Then the regret of the resulting hybrid algorithm H enjoys the following guarantees:
- If FTL achieves constant regret as in the setting of Corollary 3.10, then the regret of H is also constant.
- If FTL achieves a regret of $O(\log n)$ as in the setting of Theorem 3.3, then the regret of H is also $O(\log n)$.
- Otherwise, the regret of H is at most $O(\sqrt{n \log n})$.
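To convey the flavor of such an aggregation, the following sketch implements a generic Prod-style multiplicative mixture of two base learners for linear losses. It is a simplified stand-in, not the exact (A, B)-prod update of Sani et al. [2014], and all names and scalings are assumptions of this sketch. Because $\mathcal{W}$ is convex and the losses are linear, playing a convex combination of the two base predictions is itself a valid play in $\mathcal{W}$.

```python
import numpy as np

def combine(preds_a, preds_b, losses, eta=0.1):
    """preds_*: (n, d) plays of base algorithms A and B; losses: (n, d) vectors f_t.
    Per-round losses <w, f_t> are assumed scaled to [-1, 1] so weights stay positive."""
    n = losses.shape[0]
    pa = pb = 1.0                      # unnormalized Prod weights
    total = 0.0
    for t in range(n):
        alpha = pa / (pa + pb)
        play = alpha * preds_a[t] + (1 - alpha) * preds_b[t]
        total += play @ losses[t]      # loss incurred by the mixture
        la, lb = preds_a[t] @ losses[t], preds_b[t] @ losses[t]
        pa *= 1.0 - eta * la           # Prod-style multiplicative update
        pb *= 1.0 - eta * lb
    return total
```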
5 Conclusion

FTL is a simple method that is known to perform well in many settings, while existing worst-case results fail to explain its good performance. Taking a thorough look at why and when FTL can be expected to achieve small regret, we discovered that the curvature of the boundary of the constraint set, together with average loss vectors bounded away from zero, helps keep the regret of FTL small. These conditions are significantly different from previous conditions on the curvature of the loss functions, which have been considered extensively in the literature. It would be interesting to further investigate this phenomenon for other algorithms or in other learning settings.
Acknowledgements
This work was supported in part by the Alberta Innovates Technology Futures through the Alberta
Ingenuity Centre for Machine Learning and by NSERC. During part of this work, T. Lattimore was
with the Department of Computing Science, University of Alberta.
References
Y. Abbasi-Yadkori. Forced-exploration based algorithms for playing in bandits with large action sets. Library and Archives Canada, 2010.
J. Abernethy, P. L. Bartlett, A. Rakhlin, and A. Tewari. Optimal strategies and minimax lower bounds for online convex games. In 21st Annual Conference on Learning Theory (COLT), 2008.
P. L. Bartlett, E. Hazan, and A. Rakhlin. Adaptive online gradient descent. In Advances in Neural Information Processing Systems (NIPS), pages 65–72, 2007.
D. Bertsekas. Nonlinear Programming. Athena Scientific, Belmont, MA, 1999.
N. Cesa-Bianchi and G. Lugosi. Prediction, Learning, and Games. Cambridge University Press, New York, NY, USA, 2006.
N. Cesa-Bianchi, A. Conconi, and C. Gentile. On the generalization ability of on-line learning algorithms. IEEE Trans. Information Theory, 50(9):2050–2057, 2004.
D. J. Foster, A. Rakhlin, and K. Sridharan. Adaptive online learning. In Advances in Neural Information Processing Systems (NIPS), pages 3357–3365, 2015.
Y. Freund and R. Schapire. A decision-theoretic generalization of on-line learning and an application to boosting. Journal of Computer and System Sciences, 55:119–139, 1997.
A. A. Gaivoronski and F. Stella. Stochastic nonstationary optimization for finding universal portfolios. Annals of Operations Research, 100(1–4):165–188, 2000.
D. Garber and E. Hazan. Faster rates for the Frank-Wolfe method over strongly-convex sets. In Proceedings of the 32nd International Conference on Machine Learning (ICML), volume 951, pages 541–549, 2015.
E. Hazan, A. Agarwal, and S. Kale. Logarithmic regret algorithms for online convex optimization. Machine Learning, 69(2–3):169–192, 2007.
R. Huang, T. Lattimore, A. György, and Cs. Szepesvári. Following the leader and fast rates in linear prediction: Curved constraint sets and other regularities. arXiv, 2016.
S. M. Kakade and S. Shalev-Shwartz. Mind the duality gap: Logarithmic regret algorithms for online optimization. In Advances in Neural Information Processing Systems (NIPS), pages 1457–1464, 2009.
W. Kotłowski. Minimax strategy for prediction with expert advice under stochastic assumptions. Algorithmic Learning Theory (ALT), 2016.
E. S. Levitin and B. T. Polyak. Constrained minimization methods. USSR Computational Mathematics and Mathematical Physics, 6(5):1–50, 1966.
H. B. McMahan. Follow-the-regularized-leader and mirror descent: Equivalence theorems and implicit updates. arXiv, 2010. URL http://arxiv.org/abs/1009.3240.
N. Merhav and M. Feder. Universal sequential learning and decision from individual data sequences. In 5th Annual ACM Workshop on Computational Learning Theory (COLT), pages 413–427. ACM Press, 1992.
F. Orabona, N. Cesa-Bianchi, and C. Gentile. Beyond logarithmic bounds in online learning. In Proceedings of the Fifteenth International Conference on Artificial Intelligence and Statistics (AISTATS), pages 823–831, 2012.
A. Rakhlin and K. Sridharan. Online learning with predictable sequences. In 26th Annual Conference on Learning Theory (COLT), pages 993–1019, 2013.
A. Sani, G. Neu, and A. Lazaric. Exploiting easy data in online optimization. In Advances in Neural Information Processing Systems (NIPS), pages 810–818, 2014.
R. Schneider. Convex Bodies: The Brunn–Minkowski Theory. Encyclopedia of Mathematics and its Applications. Cambridge Univ. Press, 2nd edition, 2014.
S. Shalev-Shwartz. Online learning and online convex optimization. Foundations and Trends in Machine Learning, 4(2):107–194, 2012.
S. Shalev-Shwartz and S. Ben-David. Understanding Machine Learning: From Theory to Algorithms. Cambridge University Press, New York, NY, USA, 2014.
T. van Erven, P. Grünwald, N. Mehta, M. Reid, and R. Williamson. Fast rates in statistical and online learning. Journal of Machine Learning Research (JMLR), 16:1793–1861, 2015. Special issue in Memory of Alexey Chervonenkis.
Multi-view Anomaly Detection via Robust
Probabilistic Latent Variable Models
Tomoharu Iwata
NTT Communication Science Laboratories
iwata.tomoharu@lab.ntt.co.jp
Makoto Yamada
Kyoto University
makoto.m.yamada@ieee.org
Abstract
We propose probabilistic latent variable models for multi-view anomaly detection, which is the task of finding instances that have inconsistent views given
multi-view data. With the proposed model, all views of a non-anomalous instance
are assumed to be generated from a single latent vector. On the other hand, an
anomalous instance is assumed to have multiple latent vectors, and its different
views are generated from different latent vectors. By inferring the number of latent vectors used for each instance with Dirichlet process priors, we obtain multiview anomaly scores. The proposed model can be seen as a robust extension of
probabilistic canonical correlation analysis for noisy multi-view data. We present
Bayesian inference procedures for the proposed model based on a stochastic EM
algorithm. The effectiveness of the proposed model is demonstrated in terms of
performance when detecting multi-view anomalies.
1 Introduction
There has been great interest in multi-view learning, in which data are obtained from various information sources. In a wide variety of applications, data are naturally comprised of multiple views.
For example, an image can be represented by color, texture and shape information; a web page can
be represented by words, images and URLs occurring on in the page; and a video can be represented
by audio and visual features. In this paper, we consider the task of finding anomalies in multi-view
data. The task is called horizontal anomaly detection [13], or multi-view anomaly detection [16].
Anomalies in multi-view data are instances that have inconsistent features across multiple views.
Multi-view anomaly detection can be used for many applications, such as information disparity management [9], purchase behavior analysis [13], malicious insider detection [16], and user aggregation
from multiple databases. In information disparity management, multiple views can be obtained from
documents written in different languages such as Wikipedia. Multi-view anomaly detection tries to
find documents that contain different information across different languages, which would be helpful for editors to select documents to be updated, or beneficial for cultural anthropologists to analyze
social difference across different languages. In purchase behavior analysis, multiple views for each
item can be defined as its genre and its purchase history, i.e. a set of users who purchased the item.
Multi-view anomaly detection can find movies inconsistently purchased by users based on the movie
genre, which would assist creating marketing strategies.
Multi-view anomaly detection is different from standard (single-view) anomaly detection. Single-view anomaly detection finds instances that do not conform to expected behavior [6]. Figure 1 (a) shows the difference between a multi-view anomaly and a single-view anomaly in a two-view data set. 'M' is a multi-view anomaly since 'M' belongs to different clusters in different views (the 'A'–'D' cluster in View 1 and the 'E'–'J' cluster in View 2) and the views of 'M' are not consistent. 'S' is a single-view anomaly since 'S' is located far from the other instances in each view. However, both views of 'S' have the same relationship with the others (they are far from the other instances), and then 'S'
[Figure 1: (a) A multi-view anomaly 'M' and a single-view anomaly 'S' in a two-view data set; each letter represents an instance, and the same letter indicates the same instance; $W_d$ is a projection matrix for view $d$. (b) Graphical model representation of the proposed model.]
is not a multi-view anomaly. Single-view anomaly detection methods, such as one-class support
vector machines [18] or tensor-based anomaly detection [11], consider that 'S' is anomalous. On the other hand, we would like to develop a multi-view anomaly detection method that detects 'M' as an anomaly, but not 'S'. Note that although single-view anomalies are uncommon instances, multi-view anomalies can be the majority if they are inconsistent across multiple views.
We propose a probabilistic latent variable model for multi-view anomaly detection. With the proposed model, there is a latent space that is shared across all views. We assume that all views of a
non-anomalous (normal) instance are generated using a single latent vector. On the other hand, an
anomalous instance is assumed to have multiple latent vectors, and its different views are generated
using different latent vectors, which indicates inconsistency across different views of the instance.
Figure 1 (a) shows an example of a latent space shared by the two-view data. Two views of every
non-multi-view anomaly can be generated from a latent vector using view-dependent projection matrices. On the other hand, since the two views of the multi-view anomaly 'M' are not consistent, two latent vectors are required to generate the two views using the projection matrices.
Since the number of latent vectors for each instance is unknown, we automatically infer it from the
given data by using Dirichlet process priors. The inference of the proposed model is based on a
stochastic EM algorithm. In the E-step, a latent vector is assigned for each view of each instance
using collapsed Gibbs sampling while analytically integrating out latent vectors. In the M-step,
projection matrices for mapping latent vectors into observations are estimated by maximizing the
joint likelihood. By alternately iterating E- and M-steps, we infer the number of latent vectors used
in each instance and calculate its anomaly score from the probability of using more than one latent
vector.
2 Proposed Model
Suppose that we are given $N$ instances with $D$ views, $X = \{X_n\}_{n=1}^N$, where $X_n = \{x_{nd}\}_{d=1}^D$ is a set of multi-view observation vectors for the $n$th instance, and $x_{nd} \in \mathbb{R}^{M_d}$ is the observation vector of the $d$th view. The task is to find anomalous instances that have inconsistent observation features across multiple views. We propose a probabilistic latent variable model for this task. The proposed model assumes that each instance has potentially a countably infinite number of latent vectors $Z_n = \{z_{nj}\}_{j=1}^\infty$, where $z_{nj} \in \mathbb{R}^K$. Each view of an instance, $x_{nd}$, is generated depending on a view-specific projection matrix $W_d \in \mathbb{R}^{M_d \times K}$ and a latent vector $z_{n s_{nd}}$ that is selected from the set of latent vectors $Z_n$. Here, $s_{nd} \in \{1, \ldots, \infty\}$ is the latent vector assignment of $x_{nd}$. When the instance is non-anomalous and all its views are consistent, all of the views are generated from a single latent vector. In other words, the latent vector assignments for all views are the same, $s_{n1} = s_{n2} = \cdots = s_{nD}$. When it is an anomaly and some views are inconsistent, different views are generated from different latent vectors, and some latent vector assignments are different, i.e. $s_{nd} \neq s_{nd'}$ for some $d \neq d'$.
Specifically, the proposed model is an infinite mixture model, where the probability for the $d$th view of the $n$th instance is given by
$$ p(x_{nd} \mid Z_n, W_d, \theta_n, \alpha) = \sum_{j=1}^\infty \theta_{nj}\, \mathcal{N}(x_{nd} \mid W_d z_{nj},\, \alpha^{-1} I), \qquad (1) $$
where $\theta_n = \{\theta_{nj}\}_{j=1}^\infty$ are the mixture weights, $\theta_{nj}$ represents the probability of choosing the $j$th latent vector, $\alpha$ is a precision parameter, $\mathcal{N}(\mu, \Sigma)$ denotes the Gaussian distribution with mean $\mu$ and covariance matrix $\Sigma$, and $I$ is the identity matrix. Information of non-anomalous instances that cannot be handled by a single latent vector is modeled as Gaussian noise, which is controlled by $\alpha$. Since we assume the same observation noise $\alpha$ across different views, the observations need to be normalized. We use a Dirichlet process for the prior of the mixture weights $\theta_n$. Its use enables us to automatically infer the number of latent vectors for each instance from the given data.
The complete generative process of the proposed model for multi-view instances $X$ is as follows:
1. Draw a precision parameter $\alpha \sim \mathrm{Gamma}(a, b)$
2. For each instance $n = 1, \ldots, N$:
  (a) Draw mixture weights $\theta_n \sim \mathrm{Stick}(\gamma)$
  (b) For each latent vector $j = 1, \ldots, \infty$:
    i. Draw a latent vector $z_{nj} \sim \mathcal{N}(0, (\alpha r)^{-1} I)$
  (c) For each view $d = 1, \ldots, D$:
    i. Draw a latent vector assignment $s_{nd} \sim \mathrm{Discrete}(\theta_n)$
    ii. Draw an observation vector $x_{nd} \sim \mathcal{N}(W_d z_{n s_{nd}}, \alpha^{-1} I)$
Here, $\mathrm{Stick}(\gamma)$ is the stick-breaking process [19] that generates mixture weights for a Dirichlet process with concentration parameter $\gamma$, and $r$ is the relative precision for latent vectors. $\alpha$ is shared by the observation and latent vector precisions because this makes it possible to analytically integrate out $\alpha$, as shown in (4). Figure 1 (b) shows a graphical model representation of the proposed model, where shaded and unshaded nodes indicate observed and latent variables, respectively.
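A minimal sampler for this generative process is sketched below. This is hypothetical NumPy code, not the authors' implementation; the Dirichlet process is truncated at $T$ sticks, and all sizes and hyperparameter values are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
N, D, K, T = 100, 3, 5, 20            # instances, views, latent dim, truncation
M = [8, 6, 7]                         # per-view observed dimensions M_d
a, b, r, gamma = 1.0, 1.0, 1.0, 1.0

alpha = rng.gamma(a, 1.0 / b)         # Gamma(a, b) with rate b (NumPy uses scale)
W = [rng.normal(size=(M[d], K)) for d in range(D)]
X, S = [], []
for n in range(N):
    v = rng.beta(1.0, gamma, size=T)  # stick-breaking proportions
    theta = v * np.concatenate(([1.0], np.cumprod(1.0 - v)[:-1]))
    theta /= theta.sum()              # renormalize the truncated sticks
    z = rng.normal(scale=1.0 / np.sqrt(alpha * r), size=(T, K))
    s = rng.choice(T, size=D, p=theta)           # assignments s_{nd}
    x = [rng.normal(W[d] @ z[s[d]], scale=1.0 / np.sqrt(alpha)) for d in range(D)]
    X.append(x); S.append(s)
```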
The joint probability of the data $X$ and the latent vector assignments $S = \{\{s_{nd}\}_{d=1}^D\}_{n=1}^N$ is given by
$$ p(X, S \mid W, a, b, r, \gamma) = p(S \mid \gamma)\, p(X \mid S, W, a, b, r), \qquad (2) $$
where $W = \{W_d\}_{d=1}^D$. Because we use conjugate priors, we can analytically integrate out the mixture weights $\theta = \{\theta_n\}_{n=1}^N$, latent vectors $Z$, and precision parameter $\alpha$. Here, we use a Dirichlet process prior for the multinomial parameter $\theta_n$, and a Gaussian-Gamma prior for the latent vector $z_{nj}$. By integrating out the mixture weights $\theta$, the first factor is calculated by
$$ p(S \mid \gamma) = \prod_{n=1}^N \frac{\gamma^{J_n} \prod_{j=1}^{J_n} (N_{nj} - 1)!}{\gamma (\gamma + 1) \cdots (\gamma + D - 1)}, \qquad (3) $$
where $N_{nj}$ represents the number of views assigned to the $j$th latent vector in the $n$th instance, and $J_n$ is the number of latent vectors of the $n$th instance for which $N_{nj} > 0$. By integrating out the latent vectors $Z$ and precision parameter $\alpha$, the second factor of (2) is calculated by
$$ p(X \mid S, W, a, b, r) = (2\pi)^{-\frac{N \sum_d M_d}{2}}\, r^{\frac{K \sum_n J_n}{2}}\, \frac{b^a\, \Gamma(a')}{b'^{\,a'}\, \Gamma(a)} \prod_{n=1}^N \prod_{j=1}^{J_n} |C_{nj}|^{\frac{1}{2}}, \qquad (4) $$
where
$$ a' = a + \frac{N \sum_{d=1}^D M_d}{2}, \qquad b' = b + \frac{1}{2} \sum_{n=1}^N \sum_{d=1}^D x_{nd}^\top x_{nd} - \frac{1}{2} \sum_{n=1}^N \sum_{j=1}^{J_n} \mu_{nj}^\top C_{nj}^{-1} \mu_{nj}, \qquad (5) $$
$$ \mu_{nj} = C_{nj} \sum_{d:\, s_{nd} = j} W_d^\top x_{nd}, \qquad C_{nj}^{-1} = \sum_{d:\, s_{nd} = j} W_d^\top W_d + r I . \qquad (6) $$
The posterior for the precision parameter $\alpha$ and that for the latent vector $z_{nj}$ are given by
$$ p(\alpha \mid X, S, W, a, b) = \mathrm{Gamma}(a', b'), \qquad p(z_{nj} \mid X, S, W, r) = \mathcal{N}(\mu_{nj}, \alpha^{-1} C_{nj}), \qquad (7) $$
respectively.
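The collapsed statistics (5)–(7) can be computed directly from the assignments. The following sketch is hypothetical NumPy code consistent with the notation above, not the authors' reference implementation; it assumes 0-based assignment labels $0, \ldots, J_n - 1$ and list-of-arrays inputs.

```python
import numpy as np

def posterior_stats(X, S, W, a, b, r):
    """Compute C_nj, mu_nj of (6) and a', b' of (5) for given assignments.
    X[n][d]: view-d observation of instance n; S[n][d]: its latent index."""
    N, D = len(X), len(X[0])
    K = W[0].shape[1]
    C, mu = [], []
    b_acc = b + 0.5 * sum(x @ x for Xn in X for x in Xn)
    for n in range(N):
        J_n = max(S[n]) + 1
        Cn, mun = [], []
        for j in range(J_n):
            Cinv = r * np.eye(K)
            h = np.zeros(K)
            for d in range(D):
                if S[n][d] == j:
                    Cinv += W[d].T @ W[d]
                    h += W[d].T @ X[n][d]
            Cnj = np.linalg.inv(Cinv)
            munj = Cnj @ h
            b_acc -= 0.5 * munj @ Cinv @ munj   # the mu^T C^{-1} mu term of (5)
            Cn.append(Cnj); mun.append(munj)
        C.append(Cn); mu.append(mun)
    a_prime = a + 0.5 * N * sum(W[d].shape[0] for d in range(D))
    return C, mu, a_prime, b_acc                # posterior alpha ~ Gamma(a', b')
```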
3 Inference
We describe inference procedures for the proposed model based on a stochastic EM algorithm, in which collapsed Gibbs sampling of the latent vector assignments $S$ and maximum joint likelihood estimation of the projection matrices $W$ are alternately iterated while analytically integrating out the latent vectors $Z$, mixture weights $\theta$ and precision parameter $\alpha$. By integrating out the latent vectors, we do not need to infer them explicitly, leading to a robust and fast-mixing inference. Let $\delta = (n, d)$ be the index of the $d$th view of the $n$th instance, for notational convenience. In the E-step, given the current state of all but one latent assignment $s_\delta$, a new value for $s_\delta$ is sampled from $\{1, \ldots, J_{n \backslash \delta} + 1\}$ according to the following probability:
$$ p(s_\delta = j \mid X, S_{\backslash \delta}, W, a, b, r, \gamma) \propto \frac{p(s_\delta = j, S_{\backslash \delta} \mid \gamma)}{p(S_{\backslash \delta} \mid \gamma)} \cdot \frac{p(X \mid s_\delta = j, S_{\backslash \delta}, W, a, b, r)}{p(X_{\backslash \delta} \mid S_{\backslash \delta}, W, a, b, r)} . \qquad (8) $$
Here $\backslash \delta$ denotes a value or set excluding the $d$th view of the $n$th instance. The first factor is given by
$$ \frac{p(s_\delta = j, S_{\backslash \delta} \mid \gamma)}{p(S_{\backslash \delta} \mid \gamma)} = \begin{cases} \dfrac{N_{nj \backslash \delta}}{D - 1 + \gamma} & \text{if } j \le J_{n \backslash \delta}, \\[2mm] \dfrac{\gamma}{D - 1 + \gamma} & \text{if } j = J_{n \backslash \delta} + 1, \end{cases} \qquad (9) $$
using (3), where $j \le J_{n \backslash \delta}$ is for existing latent vectors, and $j = J_{n \backslash \delta} + 1$ is for a new latent vector. By using (4), the second factor is given by
$$ \frac{p(X \mid s_\delta = j, S_{\backslash \delta}, W, a, b, r)}{p(X_{\backslash \delta} \mid S_{\backslash \delta}, W, a, b, r)} = (2\pi)^{-\frac{M_d}{2}}\, r^{\frac{K}{2} \mathbb{I}(j = J_{n \backslash \delta} + 1)}\, \frac{b_{\backslash \delta}'^{\,a_{\backslash \delta}'}}{b_{s_\delta = j}'^{\,a_{s_\delta = j}'}} \cdot \frac{\Gamma(a_{s_\delta = j}')}{\Gamma(a_{\backslash \delta}')} \cdot \frac{|C_{nj, s_\delta = j}|^{\frac{1}{2}}}{|C_{nj \backslash \delta}|^{\frac{1}{2}}}, \qquad (10) $$
where $\mathbb{I}(\cdot)$ represents the indicator function, i.e. $\mathbb{I}(A) = 1$ if $A$ is true and 0 otherwise, and the subscript $s_\delta = j$ indicates the value when $x_\delta$ is assigned to the $j$th latent vector, as follows:
$$ b_{s_\delta = j}' = b_{\backslash \delta}' + \frac{1}{2} x_\delta^\top x_\delta + \frac{1}{2} \mu_{nj \backslash \delta}^\top C_{nj \backslash \delta}^{-1} \mu_{nj \backslash \delta} - \frac{1}{2} \mu_{nj, s_\delta = j}^\top C_{nj, s_\delta = j}^{-1} \mu_{nj, s_\delta = j}, \qquad a_{s_\delta = j}' = a', \qquad (11) $$
$$ \mu_{nj, s_\delta = j} = C_{nj, s_\delta = j} \left( W_d^\top x_\delta + C_{nj \backslash \delta}^{-1} \mu_{nj \backslash \delta} \right), \qquad (12) $$
$$ C_{nj, s_\delta = j}^{-1} = W_d^\top W_d + C_{nj \backslash \delta}^{-1} . \qquad (13) $$
Intuitively, if the current view cannot be modeled well by the existing latent vectors, a new latent vector is used, which indicates that the view is inconsistent with the other views.
In the M-step, the projection matrices $W$ are estimated by maximizing the logarithm of the joint likelihood (2) while fixing the cluster assignment variables $S$. By setting the gradient of the joint log likelihood with respect to $W$ equal to zero, an estimate of $W$ is obtained as follows:
$$ W_d = \left( \frac{a'}{b'} \sum_{n=1}^N x_{nd}\, \mu_{n s_{nd}}^\top \right) \left( \sum_{n=1}^N \sum_{j=1}^{J_n} C_{nj} + \frac{a'}{b'} \sum_{n=1}^N \mu_{n s_{nd}}\, \mu_{n s_{nd}}^\top \right)^{-1} . \qquad (14) $$
When we iterate the E-step, which samples the latent vector assignment $s_{nd}$ by employing (8) for each view $d = 1, \ldots, D$ of each instance $n = 1, \ldots, N$, and the M-step, which maximizes the joint likelihood using (14) with respect to the projection matrix $W_d$ for each view $d = 1, \ldots, D$, we obtain an estimate of the latent vector assignments and projection matrices.
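A direct transcription of the update (14) is sketched below, reusing the statistics $\mu_{nj}$ and $C_{nj}$ of (6). This is hypothetical NumPy code consistent with the reconstruction of (14) above, not the authors' implementation; input layouts are assumptions of the sketch.

```python
import numpy as np

def m_step_W(X, S, mu, C, a_prime, b_prime, d):
    """Update W_d by (14). mu[n][j] and C[n][j] are the statistics of (6)."""
    ratio = a_prime / b_prime
    Md = X[0][d].shape[0]
    K = C[0][0].shape[0]
    num = np.zeros((Md, K))
    den = np.zeros((K, K))
    for n in range(len(X)):
        j = S[n][d]                         # latent vector used by view d
        num += ratio * np.outer(X[n][d], mu[n][j])
        den += ratio * np.outer(mu[n][j], mu[n][j])
        for Cnj in C[n]:
            den += Cnj                      # sum of C_nj over j = 1..J_n
    return num @ np.linalg.inv(den)
```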
In Section 2, we defined that an instance is an anomaly when its different views are generated from different latent vectors. Therefore, for an anomaly score, we use the probability that the instance uses more than one latent vector. It is estimated by using the samples obtained in the inference as follows: $v_n = \frac{1}{H} \sum_{h=1}^H \mathbb{I}(J_n^{(h)} > 1)$, where $J_n^{(h)}$ is the number of latent vectors used by the $n$th instance in the $h$th iteration of the Gibbs sampling after the burn-in period, and $H$ is the number of iterations. The output of the proposed method is a ranked list of anomalies based on their anomaly scores. An analyst would investigate the top few anomalies, or use a threshold to select the anomalies [6]. The threshold can be determined based on a targeted false alarm and detection rate.
We can use cross-validation to select an appropriate dimensionality $K$ for the latent space. With cross-validation, we hold out some features from the given data as missing, and infer the model with different values of $K$. Then, we select the smallest $K$ value that performs best at predicting the missing values.
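Given stored post-burn-in samples of $J_n$, the score is a one-liner; a minimal sketch (hypothetical NumPy code; the storage layout is an assumption) follows.

```python
import numpy as np

def anomaly_scores(J_hist):
    """J_hist[h][n] = J_n^{(h)}, the number of latent vectors used by instance n
    in post-burn-in Gibbs sample h."""
    J = np.asarray(J_hist)            # shape (H, N)
    return (J > 1).mean(axis=0)       # v_n = (1/H) sum_h I(J_n^{(h)} > 1)
```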
4 Related Work
Anomaly detection has had a wide variety of applications, such as credit card fraud detection [1],
intrusion detection for network security [17], and analysis for healthcare data [3]. However, most
existing anomaly detection techniques assume data with a single view, i.e. a single observation
feature set.
A number of anomaly detection methods for two-view data have been proposed [12, 20–22, 24].
However, they cannot be used for data with more than two views. Gao et al. [13] proposed a
HOrizontal Anomaly Detection algorithm (HOAD) for finding anomalies from multi-view data. In
HOAD, there are hyperparameters including a weight for the constraint that require the data to be
labeled as anomalous or not for tuning, and the performance is sensitive to the hyperparameters. On
the other hand, the parameters with the proposed model can be estimated from the given multi-view
data without label information by maximizing the likelihood. In addition, because the proposed
model is a probabilistic generative model, we can extend it in a probabilistically principled manner,
for example, for handling missing data and combining with other probabilistic models.
Liu and Lam [16] proposed multi-view anomaly detection methods using consensus clustering. They
found anomalies based on the inconsistency of clustering results across multiple views. Therefore,
they cannot find inconsistency within a cluster. Christoudias et al. [8] proposed a method for filtering
instances that are corrupted by background noise from multi-view data. The multi-view anomalies
considered in this paper include not only instances corrupted by background noise but also instances
categorized into different foreground classes across views, and instances with inconsistent views
even if they belong to the same cluster. Recently, Alvarez et al. [2] proposed a multi-view anomaly
detection method. However, since the method is based on clustering, it cannot find anomalies when
there are no clusters in the given data.
The proposed model is a generalization of either probabilistic principal component analysis
(PPCA) [23] or probabilistic canonical correlation analysis (PCCA) [5]. When all views are generated from different latent vectors for every instance, the proposed model corresponds to PPCA
that is performed independently for each view. When all views are generated from a single latent
vector for every instance, the proposed model corresponds to PCCA with spherical noise.
PCCA, or canonical correlation analysis (CCA), can be used for multi-view anomaly detection. With
PCCA, a latent vector that is shared by all views for each instance and a linear projection matrix for
each view are estimated by maximizing the likelihood, or minimizing the reconstruction error of the
given data. The reconstruction error for each instance can be used as an anomaly score. However, the
reconstruction errors are not reliable because they are calculated from parameters that are estimated
using data with anomalies by assuming that all of the instances are non-anomalous. On the other
hand, because the proposed model simultaneously estimates the parameters and infers anomalies,
the estimated parameters are not contaminated by the anomalies. With PPCA and PCCA, Gaussian
distributions are used for observation noise, which are sensitive to atypical observations. Robust
PPCA and PCCA [4] use Student-t distributions instead of Gaussian distributions, which are stable
to data containing single-view anomalies. The proposed model assumes Gaussian observation noise,
and its precision is parameterized by a Gamma distributed variable ?. Since we marginalize out ?
in the inference as written in (4), the observation noise becomes a Student-t distribution. Therefore,
the proposed model is robust to single-view anomalies.
5
With some CCA-related methods, each latent vector is factorized into shared and private components
across different views [10]. They assume that every instance has shared and private parts that are the
same dimensionality for all instances. In contrast, the proposed model assumes that non-anomalous
instances have only shared latent vectors, and anomalies have private latent vectors. The proposed
model can be seen as CCA with private latent vectors, where latent vectors across views are clustered for each instance. When CCA with private latent vectors are inferred without clustering, the
inferred private latent vectors do not become the same even if it is generated from a single latent vector, because switching latent dimension or rotating the latent space does not change the likelihood.
Therefore, difference of the latent vectors cannot be used for multi-view anomaly detection.
5 Experiments
Data We evaluated the proposed model quantitatively by using 11 data sets, which we obtained
from the LIBSVM data sets [7]. We generated two views by randomly splitting the features, where
each feature can belong to only a single view, and anomalies were added by swapping views of
two randomly selected instances regardless of their class labels for each view. Splitting data does
not generate anomalies. Therefore, we can evaluate methods while controlling the anomaly rate
properly. By swapping, although single-view anomalies cannot be created since the distribution for
each view does not change, multi-view anomalies are created.
Comparing methods We compared the proposed model with probabilistic canonical correlation
analysis (PCCA), horizontal anomaly detection (HOAD) [13], consensus clustering based anomaly
detection (CC) [16], and one-class support vector machine (OCSVM) [18]. For PCCA, we used
the proposed model in which the number of latent vectors was fixed at one for every instance. The
anomaly scores obtained with PCCA were calculated based on the reconstruction errors. HOAD requires to select an appropriate hyperparameter value for controlling the constraints whereby different
views of the same instance are embedded close together. We ran HOAD with different hyperparameter settings {0.1, 1, 10, 100}, and show the results that achieved the highest performance for each
data set. For CC, first we clustered instances for each view using spectral clustering. We set the
number of clusters at 20, which achieved a good performance in preliminary experiments. Then, we
calculated anomaly scores by the likelihood of consensus clustering when an instance was removed
since it indicates inconsistency of the instance across different views. OCSVM is a representative
method for single-view anomaly detection. To investigate the performance of a single-view method
for multi-view anomaly detection, we included OCSVM as a comparison method. For OCSVM,
multiple views are concatenated in a single vector, then use it for the input. We used Gaussian kernel. In the proposed model, we used ? = 1, a = 1, and b = 1 for all experiments. The number of
iterations for the Gibbs sampling was 500, and the anomaly score was calculated by averaging over
the multiple samples.
Multi-view anomaly detection For the evaluation measurement, we used the area under the ROC
curve (AUC). A higher AUC indicates a higher anomaly detection performance. Figure 2 shows
AUCs with different rates of anomalies using 11 two-view data sets, which are averaged over 50
experiments. For the dimensionality of the latent space, we used K = 5 for the proposed model,
PCCA, and HOAD. In general, as the anomaly rate increases, the performance decreases. The
proposed model achieved the best performance with eight of the 11 data sets. This result indicates
that the proposed model can find anomalies effectively by inferring a number of latent vectors for
each instance. The performance of CC was low because it assumes that there are clusters for each
view, and it cannot find anomalies within clusters. The AUC of OCSVM was low, because it is a
single-view anomaly detection method, which considers instances anomalous that are different from
others within a single view. Multi-view anomaly detection is the task to find instances that have
inconsistent features across views, but not inconsistent features within a view. The computational
time needed for PCCA was 2 sec, and that needed for the proposed model was 35 sec with wine
data.
Figure 3 shows AUCs with different dimensionalities of latent vectors using data sets whose anomaly
rate is 0.4. When the dimensionality was very low (K = 1 or 2), the AUC was low in most of the data
sets, because low-dimensional latent vectors cannot represent the observation vectors well. With all
the methods, the AUCs were relatively stable when the latent dimensionality was higher than four.
6
[Figure 2: Average AUCs with different anomaly rates, and their standard errors; a higher AUC is better. Panels (a)–(k): breast-cancer, diabetes, glass, heart, ionosphere, sonar, svmguide2, svmguide4, vehicle, vowel, wine; each panel plots AUC against anomaly rate for Proposed, PCCA, HOAD, CC, and OCSVM.]
[Figure 3: Average AUCs with different dimensionalities of latent vectors, and their standard errors. Panels (a)–(k) as in Figure 2; each panel plots AUC against latent dimensionality for Proposed, PCCA, and HOAD.]
Single-view anomaly detection We would like to find multi-view anomalies, but would not like to detect single-view anomalies. We illustrate that the proposed model does not detect single-view anomalies using synthetic single-view anomaly data. With the synthetic data, latent vectors for
Table 1: Average AUCs for single-view anomaly detection.
Proposed: 0.117 ± 0.098    PCCA: 0.174 ± 0.095    OCSVM: 0.860 ± 0.232
Table 2: High and low anomaly score movies calculated by the proposed model.
High scores: The Full Monty (0.98), Liar Liar (0.93), The Professional (0.91), Mr. Holland's Opus (0.88), Contact (0.87).
Low scores: Star Trek VI (0.04), Star Trek III (0.04), The Saint (0.04), Heat (0.03), Conspiracy Theory (0.03).
single-view anomalies were generated from $\mathcal{N}(0, 10 I)$, and those for non-anomalous instances were generated from $\mathcal{N}(0, I)$. Since each of the anomalies has only a single latent vector, it is not a multi-view anomaly. The numbers of anomalous and non-anomalous instances were 5 and 95, respectively. The dimensionalities of the observed and latent spaces were five and three, respectively.
Table 1 shows the average AUCs with the single-view anomaly data, which are averaged over 50
different data sets. The low AUC of the proposed model indicates that it does not consider singleview anomalies as anomalies. On the other hand, the AUC of the one-class SVM (OCSVM) was
high because OCSVM is a single-view anomaly detection method, and it leads to low multi-view
anomaly detection performance.
Application to movie data For an application of multi-view anomaly detection, we analyzed inconsistency between movie rating behavior and genre in the MovieLens data [14]. An instance corresponds to a movie, where the first view represents whether or not the movie is rated by each user, and the second view represents the movie genre. Both views consist of binary features, where some movies are categorized into multiple genres. We used 338 movies, 943 users and 19 genres. Table 2 shows high and low anomaly score movies when we analyzed the movie data by the proposed method with $K = 5$. 'The Full Monty' and 'Liar Liar' were categorized in the 'Comedy' genre. They are rated not only by users who like 'Comedy', but also by users who like 'Romance' and 'Action-Thriller'. 'The Professional' was an anomaly because it was rated by two different user groups, where one group prefers 'Romance' and the other prefers 'Action'. Since the 'Star Trek' series are typical Sci-Fi and liked by specific users, their anomaly scores were low.
6 Conclusion
We proposed a generative model approach for multi-view anomaly detection, which finds instances
that have inconsistent views. In the experiments, we confirmed that the proposed model could
perform much better than existing methods for detecting multi-view anomalies. There are several
avenues that can be pursued for future work. Since the proposed model assumes the linearity of
observations with respect to their latent vectors, it cannot find anomalies when different views are
in a nonlinear relationship. We can relax this assumption by using Gaussian processes [15]. We can
also relax the assumption that non-anomalous instances have the same latent vector across all views
by introducing private latent vectors [10]. The proposed model assumes Gaussian observation noise.
Our framework can be extended for binary or count data by using Bernoulli or Poisson distributions
instead of Gaussian.
Acknowledgments
MY was supported by KAKENHI 16K16114.
References
[1] E. Aleskerov, B. Freisleben, and B. Rao. Cardwatch: A neural network based database mining system for credit card fraud detection. In Proceedings of the IEEE/IAFE Computational Intelligence for Financial Engineering, pages 220–226, 1997.
[2] A. M. Alvarez, M. Yamada, A. Kimura, and T. Iwata. Clustering-based anomaly detection in multi-view data. In Proceedings of the ACM International Conference on Information and Knowledge Management, CIKM, 2013.
[3] M.-L. Antonie, O. R. Zaiane, and A. Coman. Application of data mining techniques for medical image classification. MDM/KDD, pages 94–101, 2001.
[4] C. Archambeau, N. Delannay, and M. Verleysen. Robust probabilistic projections. In Proceedings of the 23rd International Conference on Machine Learning, pages 33–40, 2006.
[5] F. R. Bach and M. I. Jordan. A probabilistic interpretation of canonical correlation analysis. Technical Report 688, Department of Statistics, University of California, Berkeley, 2005.
[6] V. Chandola, A. Banerjee, and V. Kumar. Anomaly detection: A survey. ACM Computing Surveys (CSUR), 41(3):15, 2009.
[7] C. Chang and C. Lin. LIBSVM: A library for support vector machines. ACM Transactions on Intelligent Systems and Technology (TIST), 2(3):27, 2011.
[8] C. M. Christoudias, R. Urtasun, and T. Darrell. Multi-view learning in the presence of view disagreement. In Proceedings of the 24th Conference on Uncertainty in Artificial Intelligence, UAI, 2008.
[9] K. Duh, C.-M. A. Yeung, T. Iwata, and M. Nagata. Managing information disparity in multilingual document collections. ACM Transactions on Speech and Language Processing (TSLP), 10(1):1, 2013.
[10] C. H. Ek, J. Rihan, P. H. Torr, G. Rogez, and N. D. Lawrence. Ambiguity modeling in latent spaces. In Machine Learning for Multimodal Interaction, pages 62–73. Springer, 2008.
[11] H. Fanaee-T and J. Gama. Tensor-based anomaly detection. Knowledge-Based Systems, 98(C):130–147, 2016.
[12] J. Gao, F. Liang, W. Fan, C. Wang, Y. Sun, and J. Han. On community outliers and their efficient detection in information networks. In Proceedings of the 16th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pages 813–822. ACM, 2010.
[13] J. Gao, W. Fan, D. Turaga, S. Parthasarathy, and J. Han. A spectral framework for detecting inconsistency across multi-source object relationships. In IEEE 11th International Conference on Data Mining (ICDM), pages 1050–1055. IEEE, 2011.
[14] J. L. Herlocker, J. A. Konstan, A. Borchers, and J. Riedl. An algorithmic framework for performing collaborative filtering. In Proceedings of the 22nd Annual International ACM SIGIR Conference on Research and Development in Information Retrieval, pages 230–237. ACM, 1999.
[15] N. D. Lawrence. Gaussian process latent variable models for visualisation of high dimensional data. Advances in Neural Information Processing Systems, 16(3):329–336, 2004.
[16] A. Y. Liu and D. N. Lam. Using consensus clustering for multi-view anomaly detection. In 2012 IEEE Symposium on Security and Privacy Workshops (SPW), pages 117–124. IEEE, 2012.
[17] L. Portnoy, E. Eskin, and S. Stolfo. Intrusion detection with unlabeled data using clustering. In Proceedings of the ACM CSS Workshop on Data Mining Applied to Security, 2001.
[18] B. Schölkopf, J. C. Platt, J. Shawe-Taylor, A. J. Smola, and R. C. Williamson. Estimating the support of a high-dimensional distribution. Neural Computation, 13(7):1443–1471, 2001.
[19] J. Sethuraman. A constructive definition of Dirichlet priors. Statistica Sinica, 4:639–650, 1994.
[20] S. Shekhar, C.-T. Lu, and P. Zhang. Detecting graph-based spatial outliers. Intelligent Data Analysis, 6(5):451–468, 2002.
[21] X. Song, M. Wu, C. Jermaine, and S. Ranka. Conditional anomaly detection. IEEE Transactions on Knowledge and Data Engineering, 19(5):631–645, 2007.
[22] J. Sun, H. Qu, D. Chakrabarti, and C. Faloutsos. Neighborhood formation and anomaly detection in bipartite graphs. In Proceedings of the 5th IEEE International Conference on Data Mining, pages 418–425. IEEE, 2005.
[23] M. Tipping and C. Bishop. Probabilistic principal component analysis. Journal of the Royal Statistical Society: Series B (Statistical Methodology), 61(3):611–622, 1999.
[24] X. Wang and I. Davidson. Discovering contexts and contextual outliers using random walks in graphs. In Proceedings of the 9th IEEE International Conference on Data Mining, pages 1034–1039. IEEE, 2009.
6,032 | 6,457 | CMA-ES with Optimal Covariance Update and Storage Complexity
Oswin Krause
Dept. of Computer Science
University of Copenhagen
Copenhagen, Denmark
oswin.krause@di.ku.dk
Dídac R. Arbonès
Dept. of Computer Science
University of Copenhagen
Copenhagen, Denmark
didac@di.ku.dk
Christian Igel
Dept. of Computer Science
University of Copenhagen
Copenhagen, Denmark
igel@di.ku.dk
Abstract
The covariance matrix adaptation evolution strategy (CMA-ES) is arguably one
of the most powerful real-valued derivative-free optimization algorithms, finding
many applications in machine learning. The CMA-ES is a Monte Carlo method,
sampling from a sequence of multi-variate Gaussian distributions. Given the
function values at the sampled points, updating and storing the covariance matrix
dominates the time and space complexity in each iteration of the algorithm. We
propose a numerically stable quadratic-time covariance matrix update scheme
with minimal memory requirements based on maintaining triangular Cholesky
factors. This requires a modification of the cumulative step-size adaption (CSA)
mechanism in the CMA-ES, in which we replace the inverse of the square root of
the covariance matrix by the inverse of the triangular Cholesky factor. Because
the triangular Cholesky factor changes smoothly with the matrix square root, this
modification does not change the behavior of the CMA-ES in terms of required
objective function evaluations as verified empirically. Thus, the described algorithm
can and should replace the standard CMA-ES if updating and storing the covariance
matrix matters.
1 Introduction
The covariance matrix adaptation evolution strategy, CMA-ES [Hansen and Ostermeier, 2001], is
recognized as one of the most competitive derivative-free algorithms for real-valued optimization
[Beyer, 2007; Eiben and Smith, 2015]. The algorithm has been successfully applied in many unbiased
performance comparisons and numerous real-world applications. In machine learning, it is mainly
used for direct policy search in reinforcement learning and hyperparameter tuning in supervised
learning (e.g., see Gomez et al. [2008]; Heidrich-Meisner and Igel [2009a,b]; Igel [2010], and
references therein).
The CMA-ES is a Monte Carlo method for optimizing functions f : ℝ^d → ℝ. The objective function f does not need to be continuous and can be multi-modal, constrained, and disturbed by noise. In each iteration, the CMA-ES samples from a d-dimensional multivariate normal distribution, the search distribution, and ranks the sampled points according to their objective function values. The mean and the covariance matrix of the search distribution are then adapted based on the ranked points. Given the ranking of the sampled points, the runtime of one CMA-ES iteration is ω(d²) because the square root of the covariance matrix is required, which is typically computed by an eigenvalue decomposition. If the objective function can be evaluated efficiently and/or d is large, the computation of the matrix square root can easily dominate the runtime of the optimization process.
Various strategies have been proposed to address this problem. The basic approach for reducing the runtime is to perform an update of the matrix only every τ ∈ Θ(d) steps [Hansen and Ostermeier, 1996, 2001], effectively reducing the time complexity to O(d²). However, this forces the algorithm to use outdated matrices during most iterations and can increase the amount of function evaluations.
Furthermore, it leads to an uneven distribution of computation time over the iterations. Another
approach is to restrict the model complexity of the search distribution [Poland and Zell, 2001; Ros
and Hansen, 2008; Sun et al., 2013; Akimoto et al., 2014; Loshchilov, 2014, 2015], for example,
to consider only diagonal matrices [Ros and Hansen, 2008]. However, this can lead to a drastic
increase in function evaluations needed to approximate the optimum if the objective function is not
compatible with the restriction, for example, when optimizing highly non-separable problems while
only adapting the diagonal of the covariance matrix [Omidvar and Li, 2011]. More recently, methods
were proposed that update the Cholesky factor of the covariance matrix instead of the covariance
matrix itself [Suttorp et al., 2009; Krause and Igel, 2015]. This works well for some CMA-ES
variations (e.g., the (1+1)-CMA-ES and the multi-objective MO-CMA-ES [Suttorp et al., 2009;
Krause and Igel, 2015; Bringmann et al., 2013]), however, the original CMA-ES relies on the matrix
square root, which cannot be replaced one-to-one by a Cholesky factor.
In the following, we explore the use of the triangular Cholesky factorization instead of the square root
in the standard CMA-ES. In contrast to previous attempts in this direction, we present an approach
that comes with a theoretical justification for why it does not deteriorate the algorithm's performance.
This approach leads to the optimal asymptotic storage and runtime complexity when adaptation of
the full covariance matrix is required, as is the case for non-separable ill-conditioned problems. Our
CMA-ES variant, referred to as Cholesky-CMA-ES, reduces the runtime complexity of the algorithm
with no significant change in the number of objective function evaluations. It also reduces the memory
footprint of the algorithm.
Section 2 briefly describes the original CMA-ES algorithm (for details we refer to Hansen [2015]).
In section 3 we propose our new method for approximating the step-size adaptation. We give a
theoretical justification for the convergence of the new algorithm. We provide empirical performance
results comparing the original CMA-ES with the new Cholesky-CMA-ES using various benchmark
functions in section 4. Finally, we discuss our results and draw our conclusions.
2 Background

Before we briefly describe the CMA-ES to fix our notation, we discuss some basic properties of using a Cholesky decomposition to sample from a multivariate Gaussian distribution. Sampling from a d-dimensional multivariate normal distribution N(m, Σ), with m ∈ ℝ^d and Σ ∈ ℝ^{d×d}, is usually done using a decomposition of the covariance matrix Σ. This could be the square root of the matrix, Σ = HH with H ∈ ℝ^{d×d}, or a lower triangular Cholesky factorization Σ = AAᵀ, which is related to the square root by the QR-decomposition H = AE, where E is an orthogonal matrix. We can sample a point x from N(m, Σ) using a sample z ∼ N(0, I) by x = Hz + m = AEz + m = Ay + m, where we set y = Ez. We have y ∼ N(0, I) since E is orthogonal. Thus, as long as we are only interested in the value of x and do not need y, we can sample using the Cholesky factor instead of the matrix square root.
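In practice this sampling scheme is a one-liner; the following is a minimal sketch in Python/NumPy (function and variable names are ours, not from the paper):

```python
import numpy as np

def sample_multivariate_normal(m, A, rng):
    """Draw x ~ N(m, A A^T) from the lower-triangular Cholesky factor A.

    Equivalent in distribution to sampling with the matrix square root,
    since the two factors differ only by an orthogonal rotation.
    """
    y = rng.standard_normal(m.shape[0])  # y ~ N(0, I)
    return A @ y + m

rng = np.random.default_rng(0)
d = 5
m = np.zeros(d)
C = np.diag(np.arange(1.0, d + 1))       # example covariance matrix
A = np.linalg.cholesky(C)                # lower triangular, C = A A^T
x = sample_multivariate_normal(m, A, rng)
```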
2.1 CMA-ES

The CMA-ES has been proposed by Hansen and Ostermeier [1996, 2001] and its most recent version is described by Hansen [2015]. In the t-th iteration of the algorithm, the CMA-ES samples λ points from a multivariate normal distribution N(m_t, σ_t² C_t), evaluates the objective function f at these points, and adapts the parameters C_t ∈ ℝ^{d×d}, m_t ∈ ℝ^d, and σ_t ∈ ℝ⁺. In the following, we present the update procedure in a slightly simplified form (for didactic reasons; we refer to Hansen [2015] for the details). All parameters (λ, μ, ω, c_σ, d_σ, c_c, c_1, c_μ) are set to their default values [Hansen, 2015, Table 1].
For a minimization task, the λ points are ranked by function value such that f(x_{1,t}) ≤ f(x_{2,t}) ≤ ⋯ ≤ f(x_{λ,t}). The distribution mean is set to the weighted average m_{t+1} = ∑_{i=1}^{μ} ω_i x_{i,t}. The weights depend only on the ranking, not on the function values directly. This renders the algorithm invariant under order-preserving transformations of the objective function. Points with smaller ranks (i.e., better objective function values) are given a larger weight ω_i, with ∑_{i=1}^{μ} ω_i = 1. The weights are zero for ranks larger than μ < λ, where typically μ = λ/2. Thus, points with function values worse than the median do not enter the adaptation process of the parameters. The covariance matrix is updated using two terms, a rank-1 and a rank-μ update. For the rank-1 update, a long-term average of the changes of m_t is maintained:

\[ p_{c,t+1} = (1 - c_c)\, p_{c,t} + \sqrt{c_c (2 - c_c)\, \mu_{\mathrm{eff}}}\; \frac{m_{t+1} - m_t}{\sigma_t}, \tag{1} \]

where μ_eff = 1 / ∑_{i=1}^{μ} ω_i² is the effective sample size given the weights. Note that p_{c,t} is large when the algorithm performs steps in the same direction, while it becomes small when the algorithm performs steps in alternating directions.¹ The rank-μ update estimates the covariance of the weighted steps x_{i,t} − m_t, 1 ≤ i ≤ μ. Combining the rank-1 and rank-μ updates gives the final update rule for C_t, which can be motivated by principles from information geometry [Akimoto et al., 2012]:

\[ C_{t+1} = (1 - c_1 - c_\mu)\, C_t + c_1\, p_{c,t+1} p_{c,t+1}^T + \frac{c_\mu}{\sigma_t^2} \sum_{i=1}^{\mu} \omega_i\, (x_{i,t} - m_t)(x_{i,t} - m_t)^T \tag{2} \]
So far, the update is (apart from initialization) invariant under affine linear transformations (i.e., x ↦ Bx + b, B ∈ GL(d, ℝ)).

The update of the global step-size parameter σ_t is based on the cumulative step-size adaptation algorithm (CSA). It measures the correlation of successive steps in a normalized coordinate system. The goal is to adapt σ_t such that the steps of the algorithm become uncorrelated. Under the assumption that uncorrelated steps are standard normally distributed, a carefully designed long-term average over the steps should have the same expected length as a χ-distributed random variable, denoted by E{χ}. The long-term average has the form

\[ p_{\sigma,t+1} = (1 - c_\sigma)\, p_{\sigma,t} + \sqrt{c_\sigma (2 - c_\sigma)\, \mu_{\mathrm{eff}}}\; C_t^{-1/2}\, \frac{m_{t+1} - m_t}{\sigma_t} \tag{3} \]

with p_{σ,1} = 0. The normalization by the factor C_t^{−1/2} is the main difference between equations (1) and (3). It is important because it corrects for a change of C_t between iterations. Without this correction, it is difficult to measure correlations accurately in the un-normalized coordinate system. For the update, the length of p_{σ,t+1} is compared to the expected length E{χ}, and σ_t is changed depending on whether the average step taken is longer or shorter than expected:

\[ \sigma_{t+1} = \sigma_t \exp\!\left( \frac{c_\sigma}{d_\sigma} \left( \frac{\lVert p_{\sigma,t+1} \rVert}{E\{\chi\}} - 1 \right) \right) \tag{4} \]

This update is not proven to preserve invariance under affine linear transformations [Auger, 2015], and it is conjectured that it does not.
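To make the interplay of equations (1)–(4) concrete, here is a sketch of one round of parameter updates in Python/NumPy. All names are ours, `E_chi` stands for E{χ}, and the explicit computation of C_t^{−1/2} via an eigendecomposition is exactly the costly step that motivates the Cholesky variant introduced below:

```python
import numpy as np

def cma_parameter_update(m, sigma, C, p_c, p_sigma, X, weights,
                         c_c, c_1, c_mu, c_sigma, d_sigma, E_chi):
    """One round of updates for m, sigma, C and the evolution paths.

    X: (mu, d) array of the mu best samples, already sorted by f-value.
    A sketch of equations (1)-(4), not a full CMA-ES implementation.
    """
    mu_eff = 1.0 / np.sum(weights ** 2)
    m_new = weights @ X                         # weighted mean, m_{t+1}
    step = (m_new - m) / sigma
    # (3)-(4): the step-size path needs C_t^{-1/2} (the expensive part)
    eigvals, B = np.linalg.eigh(C)
    C_inv_sqrt = B @ np.diag(eigvals ** -0.5) @ B.T
    p_sigma = ((1 - c_sigma) * p_sigma
               + np.sqrt(c_sigma * (2 - c_sigma) * mu_eff)
               * (C_inv_sqrt @ step))
    sigma_new = sigma * np.exp(c_sigma / d_sigma
                               * (np.linalg.norm(p_sigma) / E_chi - 1))
    # (1)-(2): covariance path and rank-1 plus rank-mu covariance update
    p_c = (1 - c_c) * p_c + np.sqrt(c_c * (2 - c_c) * mu_eff) * step
    Y = (X - m) / sigma
    C_new = ((1 - c_1 - c_mu) * C
             + c_1 * np.outer(p_c, p_c)
             + c_mu * (Y.T * weights) @ Y)
    return m_new, sigma_new, C_new, p_c, p_sigma
```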
3 Cholesky-CMA-ES

In general, computing the matrix square root or the Cholesky factor of a d × d matrix has time complexity ω(d²) (i.e., it scales worse than quadratically). To reduce this complexity, Suttorp et al. [2009] have suggested to replace the process of updating the covariance matrix and decomposing it afterwards by updates directly operating on the decomposition (i.e., the covariance matrix is never computed and stored explicitly; only its factorization is maintained). Krause and Igel [2015] have shown that the update of C_t in equation (2) can be rewritten as a quadratic-time update of its triangular Cholesky factor A_t with C_t = A_t A_tᵀ. They consider the special case λ = μ = 1. We propose to extend this update to the standard CMA-ES, which leads to a runtime of O(μd²). As typically μ = O(log(d)), this gives a large speed-up compared to the explicit recomputation of the Cholesky factor or the inverse of the covariance matrix.
Unfortunately, the fast Cholesky update cannot be applied directly to the original CMA-ES. To see this, consider the term s_t = C_t^{−1/2}(m_{t+1} − m_t) in equation (3). Rewriting p_{σ,t+1} in terms of s_t in a non-recursive fashion, we obtain

\[ p_{\sigma,t+1} = \sqrt{c_\sigma (2 - c_\sigma)\, \mu_{\mathrm{eff}}}\; \sum_{k=1}^{t} \frac{(1 - c_\sigma)^{t-k}}{\sigma_k}\, s_k . \]

¹ Given c_c, the factors in (1) are chosen to compensate for the change in variance when adding distributions. If the ranking of the points were purely random, √μ_eff · (m_{t+1} − m_t)/σ_t ∼ N(0, C_t), and if C_t = I and p_{c,t} ∼ N(0, I), then p_{c,t+1} ∼ N(0, I).
Algorithm 1: The Cholesky-CMA-ES.
input: λ, μ, m_1, ω_{i=1...μ}, c_σ, d_σ, c_c, c_1 and c_μ
A_1 = I, p_{c,1} = 0, p_{σ,1} = 0
for t = 1, 2, . . . do
    for i = 1, . . . , λ do
        x_{i,t} = σ_t A_t y_{i,t} + m_t,  with y_{i,t} ∼ N(0, I)
    Sort x_{i,t}, i = 1, . . . , λ, increasing by f(x_{i,t})
    m_{t+1} = ∑_{i=1}^{μ} ω_i x_{i,t}
    p_{c,t+1} = (1 − c_c) p_{c,t} + √(c_c (2 − c_c) μ_eff) · (m_{t+1} − m_t)/σ_t
    // Apply formula (2) to A_t
    A_{t+1} ← √(1 − c_1 − c_μ) · A_t
    A_{t+1} ← rankOneUpdate(A_{t+1}, c_1, p_{c,t+1})
    for i = 1, . . . , μ do
        A_{t+1} ← rankOneUpdate(A_{t+1}, c_μ ω_i, (x_{i,t} − m_t)/σ_t)
    // Update σ using s̃_k as in (5)
    p_{σ,t+1} = (1 − c_σ) p_{σ,t} + √(c_σ (2 − c_σ) μ_eff) · A_t^{−1} (m_{t+1} − m_t)/σ_t
    σ_{t+1} = σ_t exp( (c_σ/d_σ) ( ‖p_{σ,t+1}‖ / E{χ} − 1 ) )
Algorithm 2: rankOneUpdate(A, β, v)
input: Cholesky factor A ∈ ℝ^{d×d} of C, β ∈ ℝ, v ∈ ℝ^d
output: Cholesky factor A′ of C + β v vᵀ
α ← v
b ← 1
for j = 1, . . . , d do
    A′_jj ← √( A_jj² + (β/b) α_j² )
    γ ← A_jj² b + β α_j²
    for k = j + 1, . . . , d do
        α_k ← α_k − (α_j / A_jj) A_kj
        A′_kj ← (A′_jj / A_jj) A_kj + (A′_jj β α_j / γ) α_k
    b ← b + β α_j² / A_jj²
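A direct transcription of Algorithm 2 into Python/NumPy, together with a check against a full re-factorization (a sketch; it assumes the updated matrix stays positive definite):

```python
import numpy as np

def rank_one_update(A, beta, v):
    """Return the lower-triangular Cholesky factor of A A^T + beta * v v^T.

    Direct transcription of Algorithm 2; O(d^2) time, O(d) extra memory.
    """
    A = A.copy()
    alpha = np.asarray(v, dtype=float).copy()
    b = 1.0
    d = A.shape[0]
    for j in range(d):
        old = A[j, j]
        A[j, j] = np.sqrt(old ** 2 + (beta / b) * alpha[j] ** 2)
        gamma = old ** 2 * b + beta * alpha[j] ** 2
        for k in range(j + 1, d):
            alpha[k] -= (alpha[j] / old) * A[k, j]
            A[k, j] = (A[j, j] / old) * A[k, j] \
                      + (A[j, j] * beta * alpha[j] / gamma) * alpha[k]
        b += beta * alpha[j] ** 2 / old ** 2
    return A

# quick consistency check against a full re-factorization
rng = np.random.default_rng(1)
M = rng.standard_normal((4, 4)); C = M @ M.T + 4 * np.eye(4)
A = np.linalg.cholesky(C); v = rng.standard_normal(4)
assert np.allclose(rank_one_update(A, 0.5, v),
                   np.linalg.cholesky(C + 0.5 * np.outer(v, v)))
```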
By the RQ-decomposition, we can find C_t^{1/2} = A_t E_t with E_t an orthogonal matrix and A_t lower triangular. When replacing s_t by s̃_t = A_t^{−1}(m_{t+1} − m_t), we obtain

\[ p_{\sigma,t+1} = \sqrt{c_\sigma (2 - c_\sigma)\, \mu_{\mathrm{eff}}}\; \sum_{k=1}^{t} \frac{(1 - c_\sigma)^{t-k}}{\sigma_k}\, E_k^T \tilde{s}_k . \]

Thus, replacing C_t^{−1/2} by A_t^{−1} introduces a new random rotation matrix E_tᵀ, which changes in every iteration. Obtaining E_t from A_t can be achieved by the polar decomposition, which is a cubic-time operation: currently there are no algorithms known that can update an existing polar decomposition from an updated Cholesky factor in less than cubic time. Thus, if our goal is to apply the fast Cholesky update, we have to perform the update without this correction factor:

\[ p_{\sigma,t+1} \approx \sqrt{c_\sigma (2 - c_\sigma)\, \mu_{\mathrm{eff}}}\; \sum_{k=1}^{t} \frac{(1 - c_\sigma)^{t-k}}{\sigma_k}\, \tilde{s}_k . \tag{5} \]

This introduces some error, but we will show in the following that we can expect this error to be small and to decrease over time as the algorithm converges to the optimum. For this, we need the following result:
Lemma 1. Consider the sequence of symmetric positive definite matrices (C̄_t)_{t=0}^{∞} with C̄_t = C_t (det C_t)^{−1/d}. Assume that C̄_t → C̄ as t → ∞ and that C̄ is symmetric positive definite with det C̄ = 1. Let C̄_t^{1/2} = Ā_t E_t denote the RQ-decomposition of C̄_t^{1/2}, where E_t is orthogonal and Ā_t lower triangular. Then it holds that E_{t−1}ᵀ E_t → I as t → ∞.

Proof. Let C̄^{1/2} = Ā Ē be the RQ-decomposition of C̄^{1/2}. As det C̄ ≠ 0, this decomposition is unique. Because the RQ-decomposition is continuous, it maps convergent sequences to convergent sequences. Therefore E_t → Ē as t → ∞, and thus E_{t−1}ᵀ E_t → Ēᵀ Ē = I.
This result establishes that, when C_t converges to a certain shape (but not necessarily to a certain scaling), A_t and thus E_t will also converge (up to scaling). Thus, as we only need the norm of p_{σ,t+1}, we can rotate the coordinate system, and by multiplying with E_t we obtain

\[ \lVert p_{\sigma,t+1} \rVert = \lVert E_t\, p_{\sigma,t+1} \rVert = \sqrt{c_\sigma (2 - c_\sigma)\, \mu_{\mathrm{eff}}}\; \Big\lVert \sum_{k=1}^{t} \frac{(1 - c_\sigma)^{t-k}}{\sigma_k}\, E_t E_k^T \tilde{s}_k \Big\rVert . \tag{6} \]

Therefore, if E_t E_{t−1}ᵀ → I, the error in the norm will also vanish due to the exponential weighting in the summation. Note that this does not hold for any decomposition C_t = B_t B_tᵀ. If we do not constrain B_t to be triangular and allow any matrix, we do not have a bijective mapping between C_t and B_t anymore, and the introduction of d(d−1)/2 degrees of freedom (as, e.g., in the update proposed by Suttorp et al. [2009]) allows the creation of non-converging sequences of E_t even for C_t = const. As the CMA-ES is a randomized algorithm, we cannot assume convergence of C_t. However, in simplified algorithms the expectation of C_t converges [Beyer, 2014]. Still, the reasoning behind Lemma 1 establishes that the error caused by replacing s_t by s̃_t is small if C_t changes slowly. Equation (6) establishes that the error depends only on the rotation of coordinate systems. As the mapping from C_t to the triangular factor A_t is one-to-one and smooth, the coordinate system changes in every step will be small, and because of the exponentially decaying weighting, only the last few coordinate systems matter at a particular time step t.

The Cholesky-CMA-ES algorithm is given in Algorithm 1. One can derive the algorithm from the standard CMA-ES by decomposing (2) into a number of rank-1 updates, C_{t+1} = (((α C_t + β_1 v_1 v_1ᵀ) + β_2 v_2 v_2ᵀ) + ⋯ ), and applying them to the Cholesky factor using Algorithm 2.
Properties of the update rule. The O(μd²) complexity of the update in the Cholesky-CMA-ES is asymptotically optimal.² Apart from the theoretical guarantees, there are several additional advantages compared to approaches using a non-triangular Cholesky factorization (e.g., Suttorp et al. [2009]). First, as only triangular matrices have to be stored, the storage complexity is optimal. Second, the diagonal elements of a triangular Cholesky factor are the square roots of the eigenvalues of the factorized matrix, that is, we get the eigenvalues of the covariance matrix for free. These are important, for example, for monitoring the conditioning of the optimization problem and, in particular, to enforce lower bounds on the variances of σ_t² C_t projected on its principal components. Third, a triangular matrix can be inverted in quadratic time. Thus, we can efficiently compute A_t^{−1} from A_t when needed, instead of having two separate quadratic-time updates for A_t^{−1} and A_t, which requires more memory and is prone to numerical instabilities.
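In code, applying A_t^{−1} to a vector never requires forming the inverse explicitly; a triangular solve does the same job in O(d²) time (a sketch using SciPy, with names of our choosing):

```python
import numpy as np
from scipy.linalg import solve_triangular

def apply_inverse_factor(A, delta_m, sigma):
    """Compute A^{-1} (m_{t+1} - m_t) / sigma via back-substitution.

    A is lower triangular, so this costs O(d^2) and avoids the numerical
    pitfalls of maintaining an explicit inverse alongside A.
    """
    return solve_triangular(A, np.asarray(delta_m) / sigma, lower=True)
```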
4 Experiments and Results

Experiments. We compared the Cholesky-CMA-ES with other CMA-ES variants.³ The reference CMA-ES implementation uses a delay strategy in which the matrix square root is computed only every max{1, 1/(10 d (c_1 + c_μ))} iterations [Hansen, 2015], which equals one for the dimensions considered in our experiments.

² Actually, the complexity is related to the complexity of multiplying two μ × d matrices. We assume a naïve implementation of matrix multiplication. With a faster multiplication algorithm, the complexity can be reduced accordingly.
³ We added our algorithm to the open-source machine learning library Shark [Igel et al., 2008] and used LAPACK for high efficiency.
[Figure 1 shows six log–log panels, (a) Sphere, (b) Cigar, (c) Discus, (d) Ellipsoid, (e) Rosenbrock, (f) DiffPowers, comparing Cholesky-CMA-ES, Suttorp-CMA-ES, CMA-ES/d, and CMA-ES-Ref.]
Figure 1: Function evaluations required to reach f(x) < 10⁻¹⁴ over problem dimensionality (medians of 100 trials). The graphs for CMA-ES-Ref and Cholesky-CMA-ES overlap.
[Figure 2 shows six log–log panels, (a) Sphere, (b) Cigar, (c) Discus, (d) Ellipsoid, (e) Rosenbrock, (f) DiffPowers, for the same four algorithms as in Figure 1.]
Figure 2: Runtime in seconds over problem dimensionality. Shown are medians of 100 trials. Note the logarithmic scaling on both axes.
Table 1: Benchmark functions used in the experiments (additionally, a rotation matrix B transforms the variables, x ↦ Bx).

Name               f(x)
Sphere             ‖x‖²
Rosenbrock         ∑_{i=0}^{d−2} [ 100 (x_{i+1} − x_i²)² + (1 − x_i)² ]
Discus             x_0² + ∑_{i=1}^{d−1} 10⁻⁶ x_i²
Cigar              10⁻⁶ x_0² + ∑_{i=1}^{d−1} x_i²
Ellipsoid          ∑_{i=0}^{d−1} 10^{−6i/(d−1)} x_i²
Different Powers   ∑_{i=0}^{d−1} |x_i|^{2 + 10i/(d−1)}
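For reference, a transcription of the (unrotated) benchmark functions into Python/NumPy, under the indexing conventions we read from Table 1:

```python
import numpy as np

def sphere(x):      return float(x @ x)

def rosenbrock(x):  return float(np.sum(100.0 * (x[1:] - x[:-1] ** 2) ** 2
                                         + (1.0 - x[:-1]) ** 2))

def discus(x):      return float(x[0] ** 2 + 1e-6 * np.sum(x[1:] ** 2))

def cigar(x):       return float(1e-6 * x[0] ** 2 + np.sum(x[1:] ** 2))

def ellipsoid(x):
    d = x.size
    return float(np.sum(10.0 ** (-6.0 * np.arange(d) / (d - 1)) * x ** 2))

def diff_powers(x):
    d = x.size
    return float(np.sum(np.abs(x) ** (2.0 + 10.0 * np.arange(d) / (d - 1))))
```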
[Figure 3 shows six panels of log f(m_t) over time, (a) Sphere, (b) Cigar, (c) Discus, (d) DiffPowers, (e) Ellipsoid, (f) Rosenbrock, for the four algorithm variants.]
Figure 3: Function value evolution over time on the benchmark functions with d = 128. Shown are single runs, namely those with runtimes closest to the corresponding median runtimes.
We call this variant CMA-ES-Ref. As an alternative, we experimented with delaying the update for d steps. We refer to this variant as CMA-ES/d. We also adapted the non-triangular Cholesky factor approach by Suttorp et al. [2009] to the state-of-the-art implementation of the CMA-ES. We refer to the resulting algorithm as Suttorp-CMA-ES.

We considered standard benchmark functions for derivative-free optimization, given in Table 1. Sphere is considered to show that on a spherical function the step-size adaptation does not behave differently; Cigar/Discus/Ellipsoid model functions with different convex shapes near the optimum; Rosenbrock tests learning a function with d − 1 bends, which lead to slowly converging covariance matrices in the optimization process; DiffPowers is an example of a function with arbitrarily bad conditioning. To test rotation invariance, we applied a rotation matrix to the variables, x ↦ Bx, B ∈ SO(d, ℝ). This is done for every benchmark function, and a rotation matrix was chosen randomly at the beginning of each trial. All starting points were drawn uniformly from [0, 1], except for Sphere, where we sampled from N(0, I). For each function, we vary d ∈ {4, 8, 16, . . . , 256}. Due to the long running times, we only compute CMA-ES-Ref up to d = 128. For the given range of dimensions, for every choice of d, we ran 100 trials from different initial points and monitored the number of iterations and the wall-clock time needed to sample a point with a function value below 10⁻¹⁴. For Rosenbrock we excluded the trials in which the algorithm did not converge to the global optimum. We further evaluated the algorithms on additional benchmark functions inspired by Stich and Müller [2012] and measured the change of rotation introduced by the Cholesky-CMA-ES at each iteration (E_t); see the supplementary material.
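A random rotation B ∈ SO(d, ℝ) as used above can be drawn, for instance, from the QR decomposition of a Gaussian matrix. The following sketch (our implementation, not necessarily the one used in the paper) yields a uniformly distributed rotation:

```python
import numpy as np

def random_rotation(d, rng):
    """Sample B uniformly from SO(d) via QR of a Gaussian matrix."""
    Q, R = np.linalg.qr(rng.standard_normal((d, d)))
    Q *= np.sign(np.diag(R))          # make the factorization unique
    if np.linalg.det(Q) < 0:          # reflect to enforce det(B) = +1
        Q[:, 0] = -Q[:, 0]
    return Q
```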
Results. Figure 1 shows that CMA-ES-Ref and Cholesky-CMA-ES required the same number of function evaluations to reach a given objective value. The CMA-ES/d required slightly more evaluations, depending on the benchmark function. When considering the wall-clock runtime, the Cholesky-CMA-ES was significantly faster than the other algorithms. As expected from the theoretical analysis, the higher the dimensionality, the more pronounced the differences; see Figure 2 (note the logarithmic scales). For d = 64 the Cholesky-CMA-ES was already 20 times faster than the CMA-ES-Ref. The drastic differences in runtime become apparent when inspecting single trials. Note that for d = 256 the matrix size exceeded the L2 cache, which affected the performance of the Cholesky-CMA-ES and Suttorp-CMA-ES. Figure 3 plots the trials with runtimes closest to the corresponding median runtimes for d = 128.
5 Conclusion

CMA-ES is a ubiquitous algorithm for derivative-free optimization. The CMA-ES has proven to be a highly efficient direct policy search algorithm and to be a useful tool for model selection in supervised learning. We propose the Cholesky-CMA-ES, which can be regarded as an approximation of the original CMA-ES. We gave theoretical arguments for why our approximation, which only affects the global step-size adaptation, does not impair performance. The Cholesky-CMA-ES achieves a better, asymptotically optimal time complexity of O(μd²) for the covariance update and optimal memory complexity. It allows for numerically stable computation of the inverse of the Cholesky factor in quadratic time and provides the eigenvalues of the covariance matrix without additional costs. We empirically compared the Cholesky-CMA-ES to the state-of-the-art CMA-ES with delayed covariance matrix decomposition. Our experiments demonstrated a significant increase in optimization speed. As expected, the Cholesky-CMA-ES needed the same amount of objective function evaluations as the standard CMA-ES, but required much less wall-clock time, and this speed-up increases with the search space dimensionality. Still, our algorithm scales quadratically with the problem dimensionality. If the dimensionality gets so large that maintaining a full covariance matrix becomes computationally infeasible, one has to resort to low-dimensional approximations [e.g., Loshchilov, 2015], which, however, bear the risk of a significant drop in optimization performance. Thus, we advocate our new Cholesky-CMA-ES for scaling up CMA-ES to large optimization problems for which updating and storing the covariance matrix is still possible, for example, for training neural networks in direct policy search.

Acknowledgement. We acknowledge support from the Innovation Fund Denmark through the projects "Personalized breast cancer screening" (OK, CI) and "Cyber Fraud Detection Using Advanced Machine Learning Techniques" (DRA, CI).
References
Y. Akimoto, Y. Nagata, I. Ono, and S. Kobayashi. Theoretical foundation for CMA-ES from information geometry perspective. Algorithmica, 64(4):698–716, 2012.
Y. Akimoto, A. Auger, and N. Hansen. Comparison-based natural gradient optimization in high dimension. In Proceedings of the 16th Annual Genetic and Evolutionary Computation Conference (GECCO), pages 373–380. ACM, 2014.
A. Auger. Analysis of Comparison-based Stochastic Continuous Black-Box Optimization Algorithms. Habilitation thesis, Faculté des Sciences d'Orsay, Université Paris-Sud, 2015.
H.-G. Beyer. Evolution strategies. Scholarpedia, 2(8):1965, 2007.
H.-G. Beyer. Convergence analysis of evolutionary algorithms that are based on the paradigm of information geometry. Evolutionary Computation, 22(4):679–709, 2014.
K. Bringmann, T. Friedrich, C. Igel, and T. Voß. Speeding up many-objective optimization by Monte Carlo approximations. Artificial Intelligence, 204:22–29, 2013.
A. E. Eiben and Jim Smith. From evolutionary computation to the evolution of things. Nature, 521:476–482, 2015.
F. Gomez, J. Schmidhuber, and R. Miikkulainen. Accelerated neural evolution through cooperatively coevolved synapses. Journal of Machine Learning Research, 9:937–965, 2008.
N. Hansen and A. Ostermeier. Adapting arbitrary normal mutation distributions in evolution strategies: The covariance matrix adaptation. In Proceedings of the IEEE International Conference on Evolutionary Computation (CEC 1996), pages 312–317. IEEE, 1996.
N. Hansen and A. Ostermeier. Completely derandomized self-adaptation in evolution strategies. Evolutionary Computation, 9(2):159–195, 2001.
N. Hansen. The CMA evolution strategy: A tutorial. Technical report, Inria Saclay – Île-de-France, Université Paris-Sud, LRI, 2015.
V. Heidrich-Meisner and C. Igel. Hoeffding and Bernstein races for selecting policies in evolutionary direct policy search. In Proceedings of the 26th International Conference on Machine Learning (ICML 2009), pages 401–408, 2009.
V. Heidrich-Meisner and C. Igel. Neuroevolution strategies for episodic reinforcement learning. Journal of Algorithms, 64(4):152–168, 2009.
C. Igel, T. Glasmachers, and V. Heidrich-Meisner. Shark. Journal of Machine Learning Research, 9:993–996, 2008.
C. Igel. Evolutionary kernel learning. In Encyclopedia of Machine Learning. Springer-Verlag, 2010.
O. Krause and C. Igel. A more efficient rank-one covariance matrix update for evolution strategies. In Proceedings of the 2015 ACM Conference on Foundations of Genetic Algorithms (FOGA XIII), pages 129–136. ACM, 2015.
I. Loshchilov. A computationally efficient limited memory CMA-ES for large scale optimization. In Proceedings of the 16th Annual Genetic and Evolutionary Computation Conference (GECCO), pages 397–404. ACM, 2014.
I. Loshchilov. LM-CMA: An alternative to L-BFGS for large scale black-box optimization. Evolutionary Computation, 2015.
M. N. Omidvar and X. Li. A comparative study of CMA-ES on large scale global optimisation. In AI 2010: Advances in Artificial Intelligence, volume 6464 of LNAI, pages 303–312. Springer, 2011.
J. Poland and A. Zell. Main vector adaptation: A CMA variant with linear time and space complexity. In Proceedings of the 10th Annual Genetic and Evolutionary Computation Conference (GECCO), pages 1050–1055. Morgan Kaufmann Publishers, 2001.
R. Ros and N. Hansen. A simple modification in CMA-ES achieving linear time and space complexity. In Parallel Problem Solving from Nature (PPSN X), pages 296–305. Springer, 2008.
S. U. Stich and C. L. Müller. On spectral invariance of randomized Hessian and covariance matrix adaptation schemes. In Parallel Problem Solving from Nature (PPSN XII), pages 448–457. Springer, 2012.
Y. Sun, T. Schaul, F. Gomez, and J. Schmidhuber. A linear time natural evolution strategy for non-separable functions. In 15th Annual Conference on Genetic and Evolutionary Computation Conference Companion, pages 61–62. ACM, 2013.
T. Suttorp, N. Hansen, and C. Igel. Efficient covariance matrix update for variable metric evolution strategies. Machine Learning, 75(2):167–197, 2009.
6,033 | 6,458 | Large Margin Discriminant Dimensionality Reduction in Prediction Space
Mohammad Saberian
Netflix
esaberian@netflix.com
Can Xu
Google
canxu@google.com
Jose Costa Pereira
INESCTEC
jose.c.pereira@inesctec.pt
Jian Yang
Yahoo Research
jianyang@yahoo-inc.com
Nuno Vasconcelos
UC San Diego
nvasconcelos@ucsd.edu
Abstract
In this paper we establish a duality between boosting and SVM, and use this
to derive a novel discriminant dimensionality reduction algorithm. In particular,
using the multiclass formulation of boosting and SVM we note that both use
a combination of mapping and linear classification to maximize the multiclass
margin. In SVM this is implemented using a pre-defined mapping (induced by
the kernel) and optimizing the linear classifiers. In boosting the linear classifiers
are pre-defined and the mapping (predictor) is learned through a combination of
weak learners. We argue that the intermediate mapping, i.e. boosting predictor, is
preserving the discriminant aspects of the data and that by controlling the dimension
of this mapping it is possible to obtain discriminant low dimensional representations
for the data. We use the aforementioned duality and propose a new method, Large
Margin Discriminant Dimensionality Reduction (LADDER) that jointly learns the
mapping and the linear classifiers in an efficient manner. This leads to a data-driven
mapping which can embed data into any number of dimensions. Experimental
results show that this embedding can significantly improve performance on tasks
such as hashing and image/scene classification.
1 Introduction
Boosting and support vector machines (SVM) are widely popular techniques for learning classifiers.
While both methods are maximizing the margin, there are a number of differences that distinguish
them; e.g. while SVM selects a number of examples to assemble the decision boundary, boosting
achieves this by combining a set of weak learners. In this work we propose a new duality between
boosting and SVM which follows from their multiclass formulations. It shows that both methods
seek a linear decision rule by maximizing the margin after transforming input data to an intermediate
space. In particular, kernel-SVM (K-SVM) [39] first selects a transformation (induced by the kernel)
that maps data points into an intermediate space, and then learns a set of linear decision boundaries
that maximize the margin. This is depicted in Figure 1-bottom. In contrast, multiclass boosting
(MCBoost) [34] relies on a set of pre-defined codewords in an intermediate space, and then learns a
mapping to this space such that it maximizes the margin with respect to the boundaries defined by
those codewords. See Figure 1-top. Therefore, both boosting and SVM follow a two-step procedure:
(i) mapping data to some intermediate space, and (ii) determine the boundaries that separate the
classes. There are, however, two notable differences: 1) while K-SVM aims to learn only the boundaries, MCBoost's effort is on learning the mapping; and 2) in K-SVM the intermediate space typically has infinite dimensions, while in MCBoost the space has M or M − 1 dimensions, where M is the number of classes.
The intermediate space (called prediction space) in the exposed duality has some interesting properties. In particular, the final classifier decision is based on the representation of data points in this prediction space, where the decision boundaries are linear. An accurate classification by these simple boundaries suggests that the input data points must be very well separated in this space. Given that in the case of boosting this space has limited dimensions, e.g. M or M − 1, this suggests that we can potentially use the predictor of MCBoost as a discriminant dimensionality reduction operator. However, the dimension of MCBoost is either M or M − 1, which restricts the application of this operator as a general dimensionality reduction operator. In addition, according to the proposed duality, each of K-SVM and boosting optimizes only one of the two components, i.e. the mapping and the decision boundaries. Because of this, extra care needs to be put into manually choosing the right kernel in K-SVM; and in MCBoost, we may not even be able to learn a good mapping if we preset some bad boundaries.

[Figure 1 schematic: MCBoost maps the data through a learned transformation and classifies with pre-selected linear classifiers; K-SVM maps the data through a selected transformation and learns the linear classifiers.]
Figure 1: Duality between multiclass boosting and SVM.
We can potentially overcome these limitations by combining boosting and SVM to jointly learn both
the mapping and linear classifiers for a prediction space of arbitrary dimension d. We note that this
is not a straightforward merge of the two methods as this can lead to a computationally prohibitive
method; e.g. imagine having to solve the quadratic optimization of K-SVM before each iteration of
boosting. In this paper, we propose a new algorithm, Large-mArgin Discriminant DimEnsionality
Reduction (LADDER), to efficiently implement this hybrid approach using a boosting-like method.
LADDER is able to learn both the mapping and the decision boundaries in a margin maximizing
objective function that is adjustable to any number of dimensions. Experiments show that the resulting
embedding can significantly improve tasks such as hashing and image/scene classification.
Related works: This paper touches several topics such as dimensionality reduction, classification,
embedding and representation learning. Due to space constraints we present only a brief overview
and comparison to previous work.
Dimensionality reduction has been studied extensively. Unsupervised techniques, such as principal
component analysis (PCA), non-negative matrix factorization (NMF), clustering, or deep autoencoders, are conceptually simple and easy to implement, but may eliminate discriminant dimensions
of the data and result in sub-optimal representations for classification. Discriminant methods, such as
sequential feature selection techniques [31], neighborhood components analysis [11], large margin
nearest neighbors [42] or maximally collapsing metric learning [37] can require extensive computation
and/or fail to guarantee large margin discriminant data representations.
The idea of jointly optimizing the classifiers and the embedding has been extensively explored in
embedding and classification literature, e.g. [7, 41, 45, 43]. These methods, however, typically rely on a linear data transformation/classifier, require more complex semi-definite programming [41], or rely on the Error Correcting Output Codes (ECOC) approach [7, 45, 10], which has shown inferior performance compared to direct multiclass boosting methods [34, 27]. In comparison, we note that the proposed method (1) is able to learn a very non-linear transformation through the boosting predictor, e.g. by boosting deep decision trees; and (2) relies on direct multiclass boosting that optimizes a margin-enforcing loss function. Another example of jointly learning the classifiers and the embedding is
multiple kernel learning (MKL) literature, e.g. [12, 36]. In these methods, a new kernel is learned as
a linear combination of fixed basis functions. LADDER differs in that 1) the basis functions are data-driven rather than fixed, and 2) our method is able to combine weak learners and form novel basis functions tailored to the task at hand. Finally, it is also possible to jointly learn the classifiers
and the embedding using deep neural networks. This, however, requires a large number of training data and can be computationally very intensive. In addition, the proposed LADDER method is a meta-algorithm that can be used to further improve deep networks, e.g. by boosting deep CNNs.
2 Duality of boosting and SVM
Consider an M-class classification problem, with training set D = {(x_i, z_i)}_{i=1}^n, where z_i ∈ {1 . . . M} is the class of example x_i. The goal is to learn a real-valued (multidimensional) function f(x) to predict the class label z of each example x. This is formulated as the predictor f(x) that minimizes the risk defined in terms of the expected loss L(z, f(x)):

\[ R[f] = E_{X,Z}\{ L(z, f(x)) \} \approx \frac{1}{n} \sum_i L(z_i, f(x_i)). \tag{1} \]

Different algorithms vary in their choice of loss functions and numerical optimization procedures. The learned predictor has large margin if the loss L(z, f(x)) encourages large values of the classification margin. For binary classification, f(x) ∈ ℝ, z ∈ {1, 2}, the margin is defined as M(x_i, z_i) = y_i f(x_i), where y_i = y(z_i) ∈ {−1, 1} is the codeword of class z_i. The classifier is then F(x) = H(sign[f(x)]), where H(+1) = 1 and H(−1) = 2.

The extension to M-ary classification requires M codewords. These are defined in a multidimensional space, i.e. as y^k ∈ ℝ^d, k = 1 . . . M, where commonly d = M or d = M − 1. The predictor is then f(x) = [f_1(x), f_2(x), . . . , f_d(x)] ∈ ℝ^d, and the margin is defined as

\[ \mathcal{M}(x_i, z_i) = \frac{1}{2} \Big( \langle f(x_i), y^{z_i} \rangle - \max_{l \neq z_i} \langle f(x_i), y^{l} \rangle \Big), \tag{2} \]

where ⟨·, ·⟩ is the Euclidean dot product. Finally, the classifier is implemented as

\[ F(x) = \arg\max_{k \in \{1, \dots, M\}} \langle y^k, f(x) \rangle. \tag{3} \]

Note that the binary equations are the special cases of (2)–(3) for codewords {−1, 1}.
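A small sketch of the margin (2) and decision rule (3) in Python/NumPy, with the codewords stored as rows of a matrix Y (naming is ours):

```python
import numpy as np

def multiclass_margin(f_x, z, Y):
    """Margin of equation (2): f_x is the prediction f(x) in R^d,
    z the true class index, Y an (M, d) matrix of codewords."""
    scores = Y @ f_x                      # <y^k, f(x)> for all k
    rival = np.max(np.delete(scores, z))  # best competing class
    return 0.5 * (scores[z] - rival)

def classify(f_x, Y):
    """Decision rule of equation (3)."""
    return int(np.argmax(Y @ f_x))
```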
Multiclass Boosting: MCBoost [34] is a multiclass boosting method that uses a set of unit vectors as codewords, forming a regular simplex in ℝ^{M−1}, and the exponential loss

\[ L(z_i, f(x_i)) = \sum_{j=1, j \neq z_i}^{M} e^{-\frac{1}{2} [ \langle y^{z_i}, f(x_i) \rangle - \langle y^{j}, f(x_i) \rangle ]} . \tag{4} \]

For M = 2, this reduces to the loss L(z_i, f(x_i)) = e^{−y^{z_i} f(x_i)} of AdaBoost [9].
Given a set G of weak learners g(x) ∈ G : X → ℝ^{M−1}, MCBoost minimizes (1) by gradient descent in function space. In each iteration, MCBoost computes the directional derivative of the risk for updating f(x) along the direction of g(x),

\[ \delta R[f; g] = \left. \frac{\partial R[f + \epsilon g]}{\partial \epsilon} \right|_{\epsilon = 0} = -\frac{1}{2n} \sum_{i=1}^{n} \langle g(x_i), w(x_i) \rangle, \tag{5} \]

where w(x_i) = ∑_{j=1}^{M} (y^{z_i} − y^{j}) e^{−½⟨y^{z_i} − y^{j}, f(x_i)⟩} ∈ ℝ^{M−1}. The direction of steepest descent and the optimal step size toward that direction are then

\[ g^* = \arg\min_{g \in G} \delta R[f; g], \qquad \alpha^* = \arg\min_{\alpha \in \mathbb{R}} R[f + \alpha g^*]. \tag{6} \]

The predictor is finally updated with f := f + α* g*. This method is summarized in Algorithm 1. As previously mentioned, it reduces to AdaBoost [9] for M = 2, in which case α* has a closed form.
Multiclass Kernel SVM (MC-KSVM): In the support vector machine (SVM) literature, the margin is defined as

\[ \mathcal{M}(x_i, w_{z_i}) = \langle \Phi(x_i), w_{z_i} \rangle - \max_{l \neq z_i} \langle \Phi(x_i), w_l \rangle, \tag{7} \]

where Φ(x) is a feature transformation, usually defined indirectly through a kernel k(x, x′) = ⟨Φ(x), Φ(x′)⟩, and w_l (l = 1 . . . M) are a set of discriminative projections. Several algorithms have been proposed for multiclass SVM learning [39, 44, 17, 5]. The classical formulation by Vapnik finds the projections that solve:

\[ \begin{aligned} \min_{w_1 \dots w_M} \quad & \sum_{l=1}^{M} \lVert w_l \rVert_2^2 + C \sum_i \xi_i \\ \text{s.t.} \quad & \langle \Phi(x_i), w_{z_i} \rangle - \langle \Phi(x_i), w_l \rangle \ge 1 - \xi_i, \quad \forall (x_i, z_i) \in D,\; l \neq z_i, \\ & \xi_i \ge 0 \quad \forall i. \end{aligned} \tag{8} \]
Algorithm 1 MCBoost
Input: Number of classes M, number of iterations N_b, codewords {y^1, . . . , y^M} ⊂ ℝ^{M−1}, and dataset D = {(x_i, z_i)}_{i=1}^n, where z_i ∈ {1 . . . M} is the label of example x_i.
Initialization: Set f = 0 ∈ ℝ^{M−1}.
for t = 1 to N_b do
    Find the best weak learner g*(x) and optimal step size α* using (6).
    Update f(x) := f(x) + α* g*(x).
end for
Output: F(x) = arg max_k ⟨f(x), y^k⟩
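A self-contained sketch of this loop in Python/NumPy. For brevity it scores each candidate weak learner and step size directly by the risk, rather than via the directional derivative (5) followed by a line search; the simplex construction and the weak-learner pool are our own illustrative choices:

```python
import numpy as np

def simplex_codewords(M):
    """M unit-norm codewords forming a regular simplex in R^{M-1}."""
    E = np.eye(M) - 1.0 / M                  # centered one-hot vectors
    U, _, _ = np.linalg.svd(E)
    Y = E @ U[:, :M - 1]                     # drop the all-ones direction
    return Y / np.linalg.norm(Y, axis=1, keepdims=True)

def mcboost_risk(F, z, Y):
    """Empirical risk (1) under the exponential loss (4).
    F: (n, M-1) predictions f(x_i); z: (n,) labels in {0..M-1}."""
    S = F @ Y.T                              # <f(x_i), y^k> for all k
    n = len(z)
    margins = S[np.arange(n), z][:, None] - S
    loss = np.exp(-0.5 * margins)
    loss[np.arange(n), z] = 0.0              # the sum in (4) excludes j = z_i
    return loss.sum(axis=1).mean()

def mcboost(X, z, Y, weak_pool, n_iters=20, steps=np.linspace(0.05, 2.0, 40)):
    """Greedy functional gradient descent over a finite weak-learner pool.
    Each g in weak_pool maps an (n, p) data matrix to (n, M-1) outputs."""
    F = np.zeros((len(z), Y.shape[1]))
    ensemble = []
    for _ in range(n_iters):
        a, g = min(((a, g) for g in weak_pool for a in steps),
                   key=lambda ag: mcboost_risk(F + ag[0] * ag[1](X), z, Y))
        F += a * g(X)
        ensemble.append((a, g))
    return ensemble, F
```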
Rewriting the constraints as

\[ \xi_i \ge \max\Big[ 0,\; 1 - \big( \langle \Phi(x_i), w_{z_i} \rangle - \max_{l \neq z_i} \langle \Phi(x_i), w_l \rangle \big) \Big], \]

and using the fact that the objective function is monotonically increasing in ξ_i, this is identical to solving the problem

\[ \min_{w_1 \dots w_M} \sum_i \Big\lfloor \langle \Phi(x_i), w_{z_i} \rangle - \max_{l \neq z_i} \langle \Phi(x_i), w_l \rangle \Big\rfloor_+ + \lambda \sum_{l=1}^{M} \lVert w_l \rVert_2^2, \tag{9} \]

where ⌊x⌋₊ = max(0, 1 − x) is the hinge loss and λ = 1/C. Hence, MC-KSVM minimizes the risk R[f] subject to a regularization constraint on ∑_l ‖w_l‖₂². The predictor of the multiclass kernel SVM (MC-KSVM) is then defined as

\[ F_{MC\text{-}KSVM}(x) = \arg\max_{l = 1 \dots M} \langle \Phi(x), w_l^* \rangle. \tag{10} \]
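The unconstrained objective (9) is easy to evaluate; a sketch in Python/NumPy, with Φ(x_i) precomputed as rows of a matrix (names are ours):

```python
import numpy as np

def mc_hinge_objective(Phi, z, W, lam):
    """Objective (9): Phi is an (n, p) matrix of features Phi(x_i),
    z the labels in {0..M-1}, W an (M, p) matrix of projections w_l,
    and lam the regularization weight (lambda = 1/C)."""
    n = len(z)
    S = Phi @ W.T                                  # <Phi(x_i), w_l>
    true = S[np.arange(n), z]
    S_masked = S.copy()
    S_masked[np.arange(n), z] = -np.inf            # exclude l = z_i
    rival = S_masked.max(axis=1)
    hinge = np.maximum(0.0, 1.0 - (true - rival))  # [x]_+ = max(0, 1 - x)
    return hinge.sum() + lam * np.sum(W ** 2)
```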
Duality: The discussion of the previous sections unveils an interesting duality between multiclass boosting and SVM. Since (7) and (10) are special cases of (2) and (3), respectively, the MC-SVM is a special case of the formulation of Section 2, with predictor f(x) = Φ(x) and codewords y^l = w_l. This leads to the duality of Figure 1. Both boosting and SVM implement a classifier with a set of linear decision boundaries on a prediction space F. This prediction space is the range space of the predictor f(x). The linear decision boundaries are the planes whose normals are the codewords y^l. For both boosting and SVM, the decision boundaries implement a large margin classifier in F. However, the learning procedure is different. For the SVM, examples are first mapped into F by a pre-defined predictor. This is the feature transformation Φ(x) that underlies the SVM kernel. The codewords (linear classifiers) are then learned so as to maximize the margin. On the other hand, for boosting, the codewords are pre-defined and the boosting algorithm learns the predictor f(x) that maximizes the margin. The boosting / SVM duality is summarized in Table 1.
Table 1: Duality between MCBoost and MC-KSVM
predictor
codewords
MCBoost
learns f (x)
fix yi
MC-KSVM
fix ?(x)
learns wl
3 Discriminant dimensionality reduction
In this section, we exploit the multiclass boosting / SVM duality to derive a new family of discriminant dimensionality reduction methods. Many learning problems require dimensionality reduction. This is usually done by mapping the space of features X to some lower dimensional space Z, and then learning a classifier on Z. However, the mapping from X to Z is usually quite difficult to learn. Unsupervised procedures, such as principal component analysis (PCA) or clustering, frequently eliminate discriminant dimensions of the data that are important for classification. On the other hand, supervised procedures tend to lead to complex optimization problems and can be quite difficult to implement. Using the proposed duality we argue that it is possible to use an embedding provided by boosting or SVM. In the case of SVM this embedding is usually infinite dimensional, which can make it impractical for some applications, e.g. the hashing problem [20]. In the case of boosting the embedding, f(x), has a finite dimension d. In general, the complexity of learning a predictor f(x) is inversely proportional to this dimension d, and lower dimensional codewords/predictors require more sophisticated predictor learning. For example, convolutional networks such as [22] use the canonical basis of R^M as codeword set, and a predictor composed of M neural network outputs. This is a deep predictor, with multiple layers of feature transformation, using a combination of linear and non-linear operations. Similarly, as discussed in the previous section, MCBoost can be used to learn predictors of dimension M or M−1, by combining weak learners. A predictor learned by any of these methods can be interpreted as a low-dimensional embedding. Compared to the classic sequential approach of first learning an intermediate low dimensional space Z and then learning a predictor f : Z → F = R^M, these methods learn the classifier directly in a low-dimensional prediction space, i.e. F = Z. In the case of boosting, this leverages a classifier that explicitly maximizes the classification margin for the solution of the dimensionality reduction problem.

Algorithm 2 Codeword boosting
Input: Dataset D = {(x_i, z_i)}_{i=1}^n where z_i ∈ {1...M} is the label of example x_i, number of classes M, a predictor f(x) : X → R^d, number of codeword learning iterations N_c, and a set of d-dimensional codewords Y.
for t = 1 to N_c do
    Compute ∂R/∂Y and find the best step size α* by (12).
    Update Y := Y − α* ∂R/∂Y.
    Normalize the codewords in Y to satisfy the constraint of (11).
end for
Output: Codeword set Y

Figure 2: Codeword updates after a gradient descent step.
The main limitation of this approach is that current multiclass boosting methods [34, 27] rely on a fixed codeword dimension d, e.g. d = M in [27] or d = M − 1 in [34]. In addition these codewords are pre-defined and are independent of the input data, e.g. vertices of a regular simplex in R^M or R^{M−1} [34]. In summary, the dimensionality of the predictor and codewords is tied to the number of classes. Next, we propose a method that extends current boosting algorithms 1) to use embeddings of arbitrary dimensions and 2) to learn the codewords (linear classifiers) based on the input data.

In principle, the formulation of Section 2 is applicable to any codeword set, and the challenge is to find the optimal codewords for a target dimension d. For this, we propose to leverage the duality between boosting and SVM. First, use boosting to learn the optimal predictor for a given set of codewords, and second use SVM to learn the optimal codewords for the given predictor. This procedure has two limitations. First, although both are large margin methods, boosting and SVM use different loss functions (exponential vs. hinge). Hence, the procedure is not guaranteed to converge. Second, an algorithm based on multiple iterations of boosting and SVM learning is computationally intensive. We avoid these problems by formulating the codeword learning problem in the boosting framework rather than an SVM formulation. For this, we note that, given a predictor f(x), it is possible to learn a set of codewords Y = {y^1 ... y^M} that guarantees large margins, under the exponential loss, by solving

min_{y^1...y^M}  R[Y, f] = (1/2n) Σ_{i=1}^{n} L(Y, z_i, f(x_i))
s.t.  ||y^k|| = 1  ∀k,   (11)

where L(Y, z_i, f(x_i)) = Σ_{j≠z_i} e^{−(1/2)⟨y^{z_i} − y^j, f(x_i)⟩}. As is usual in boosting, we propose to solve this optimization by a gradient descent procedure. Each iteration of the proposed codeword boosting algorithm computes the risk derivatives with respect to all codewords and forms the matrix ∂R/∂Y = [∂R[Y,f]/∂y^1 ... ∂R[Y,f]/∂y^M]. The codewords are then updated according to Y = Y − α* ∂R/∂Y, where

α* = arg min_α R(Y − α ∂R/∂Y, f)   (12)

is found by a line search. Finally, each codeword y^l is normalized to satisfy the constraint of (11). This algorithm is summarized in Algorithm 2.
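For illustration, the gradient of (11) can be computed in closed form (the expression is derived in the Analysis paragraph below); the NumPy sketch here is ours, with a fixed step size standing in for the line search (12).

import numpy as np

def codeword_grad(Y, F, labels):
    # dR/dY for the risk of (11): Y is M x d with unit rows, F is the n x d
    # matrix of predictions f(x_i), labels in {0..M-1}. For j != z_i the
    # gradient term involves exp(-.5 <y^{z_i} - y^j, f(x_i)>).
    n = F.shape[0]
    grad = np.zeros_like(Y)
    for i in range(n):
        zi = labels[i]
        e = np.exp(-0.5 * (Y[zi] - Y) @ F[i])
        e[zi] = 0.0
        grad += np.outer(e, F[i])       # contributions to y^j, j != z_i
        grad[zi] -= e.sum() * F[i]      # contribution to y^{z_i}: -f(x_i) L_i
    return grad / (2 * n)

def codeword_step(Y, F, labels, alpha=0.1):
    # One iteration of Algorithm 2 with a fixed (illustrative) step size
    # alpha in place of the line search (12), then the unit-norm projection.
    Y = Y - alpha * codeword_grad(Y, F, labels)
    return Y / np.linalg.norm(Y, axis=1, keepdims=True)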
Given this, we are ready to introduce an algorithm that jointly optimizes the codeword set Y and predictor f. This is implemented using an alternate minimization procedure that iterates between the following two steps. First, given a codeword set Y, determine the predictor f*(x) of minimum risk R[Y, f]. This is implemented with MCBoost (Algorithm 1). Second, given the optimal predictor f*(x), determine the codeword set Y* of minimum risk R[Y*, f*]. This is implemented with codeword boosting (Algorithm 2). Note that, unlike the combined SVM-boosting solution, the two steps of this algorithm optimize the common risk of (11). Since this risk encourages predictors of large margin, the algorithm is denoted Large mArgin Discriminant DimEnsionality Reduction (LADDER). The procedure is summarized in Algorithm 3.

Algorithm 3 LADDER
Input: Number of classes M, dataset D = {(x_i, z_i)}_{i=1}^n where z_i ∈ {1...M} is the label of example x_i, predictor and codeword dimension d, number of boosting iterations N_b, number of codeword learning iterations N_c, and number of interleaving rounds N_r.
Initialization: Set f = 0 ∈ R^d and initialize Y.
for t = 1 to N_r do
    Use Y and run N_b iterations of MCBoost (Algorithm 1) to update f(x).
    Use f(x) and run N_c iterations of gradient descent in Algorithm 2 to update Y.
end for
Output: Predictor f(x), codeword set Y, and decision rule F(x) = arg max_k ⟨f(x), y^k⟩
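A toy sketch of this alternation, reusing mcboost_weights and codeword_grad from the sketches above: to keep it self-contained, the functional gradient itself plays the role of the weak learner (an unrestricted learner class) and the step sizes are fixed, whereas LADDER proper runs MCBoost over a real learner pool and line-searches both steps; all names and constants here are illustrative.

def ladder_toy(F, Y, labels, Nr=50, Nb=2, Nc=4, lr_f=0.5, lr_y=0.5):
    # F: n x d initial predictions (e.g. zeros), Y: M x d unit codewords.
    for _ in range(Nr):
        for _ in range(Nb):                  # predictor updates given Y
            F = F + lr_f * mcboost_weights(F, Y, labels)
        for _ in range(Nc):                  # codeword updates given f
            Y = Y - lr_y * codeword_grad(Y, F, labels)
            Y = Y / np.linalg.norm(Y, axis=1, keepdims=True)
    return F, Y

Moving F along the weights w(x_i) decreases the risk, since by (5) the directional derivative of R in that direction is −(1/2n) Σ_i ||w(x_i)||² ≤ 0.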
Analysis: First, note that the sub-problems solved by each step of LADDER, i.e. the minimization of R[Y, f] given Y or f, are convex. However, the overall optimization of (11) is not convex. Hence, the algorithm will converge to a local optimum, which depends on the initialization conditions. We propose an initialization procedure motivated by the following intuition. If two of the codewords are very close, e.g. y^j ≈ y^k, then ⟨y^j, f(x)⟩ is very similar to ⟨y^k, f(x)⟩ and small variations of x may change the classification results of (3) from k to j and vice-versa. This suggests that the codewords should be as distant from each other as possible. We thus propose to initialize the MCBoost codewords with a set of unit vectors of maximum pair-wise distance, e.g.

max_{y^1...y^M} min_{j≠k} ||y^j − y^k||, ∀j ≠ k.   (13)

For d = M, these codewords can be the canonical basis of R^M. We have implemented a barrier method from [18] to obtain maximum pair-wise distance codeword sets for any d < M.

Second, Algorithm 2 has interesting intuitions. We start by rewriting the risk derivatives as

∂R[Y,f]/∂y^j = (1/2n) Σ_i (−1)^{δ_ij} f(x_i) L_i s_ij^{(1−δ_ij)},

where L_i = L(Y, z_i, f(x_i)), s_ij = e^{(1/2)⟨y^j, f(x_i)⟩} / Σ_{k≠z_i} e^{(1/2)⟨y^k, f(x_i)⟩}, and δ_ij = 1 if z_i = j and δ_ij = 0 otherwise. It follows that the update of each codeword along the negative gradient direction, −∂R[Y,f]/∂y^j, is a weighted average of the predictions f(x_i). Since δ_ij is an indicator of the examples x_i in class j, the term (−1)^{δ_ij} reflects the assignment of examples to the classes. While each x_i in class j contributes to the update of y^j with a multiple of the prediction f(x_i), this contribution is −f(x_i) for examples in classes other than j. Hence, each example x_i in class j pulls y^j towards its current prediction f(x_i), while pulling all other codewords in the opposite direction. This is illustrated in Figure 2. The result is an increase of the dot-product ⟨y^j, f(x_i)⟩, while the dot-products ⟨y^k, f(x_i)⟩ ∀k ≠ j decrease. Besides encouraging correct classification, these dot product adjustments maximize the multiclass margin. This effect is modulated by the weight of the contribution of each point. This weight is the factor L_i s_ij^{(1−δ_ij)}, which has two components. The first, L_i, is the loss of the current predictor f(x_i) for example x_i. This measures how much x_i contributes to the current risk and is similar to the example weighting mechanism of AdaBoost. Training examples are weighted so as to emphasize those poorly classified by the current predictor f(x). The second, s_ij^{(1−δ_ij)}, only affects examples x_i that do not belong to class j. For these, the weight is multiplied by s_ij. This computes a softmax-like operation among the codeword projections of f(x_i) and is large when the projection along y^j is one of the largest, and small otherwise. Hence, among examples x_i from classes other than j that have equivalent loss L_i, the learning algorithm weights more heavily those most likely to be mistakenly assigned to class j. As a result, the emphasis on incorrectly classified examples is modulated by how much class pairs are confused by the current predictor. Examples from classes that are more confusable with class j receive larger weight for the update of the latter.
Figure 3: Left: Initial codewords for all traffic sign classes. Middle: codewords learned by LADDER. Right: Error rate vs. number of dimensions for MCBoost, LADDER, and the standard MCBoost classifier (CLR) combined with several dimensionality reduction techniques (PCA, ProbPCA, KernelPCA, LPP, NPE, LDA).
4 Experiments
We start with a traffic sign detection problem that allows some insight on the merits of learning codewords from data. This experiment was based on about 2K instances from 17 different types of traffic signs in the first set of the Summer traffic sign dataset [25], which was split into training and test sets. Examples of traffic signs are shown on the left of Figure 3. We also collected about 1,000 background images, to represent non-traffic-sign images, leading to a total of 18 classes. The background class is shown as a black image in Figure 3-left and middle. All images were resized to 40 × 40 pixels and the integral channel method of [8] was used to extract 810 features per image.

The first experiment compared the performance of traditional multiclass boosting to LADDER. The former was implemented by running MCBoost (Algorithm 1) for N_b = 200 iterations, using the optimal solution of (13) as codeword set. LADDER was implemented with Algorithm 3, using N_b = 2, N_c = 4, and N_r = 100. In both cases, codewords were initialized with the solution of (13) and the initial assignment of codewords to classes was random. In each experiment, the learning algorithm was initialized with 5 different random assignments. Figure 3 compares the initial codewords (Left) to those learned by LADDER (Middle) for a 2-D embedding (d = 2). A video showing the evolution of the codewords is available in the supplementary materials. The organization of the learned codewords reflects the semantics of the various classes. Note, for example, how LADDER clusters the codewords associated with speed limit signs, which were initially scattered around the unit circle. On the other hand, all traffic sign codewords are pushed away from that of the background image class. Within the traffic sign class, round signs are positioned in one half-space and signs of other shapes in the other. Regarding discriminant power, a decision rule learned by MCBoost achieved a 0.44 ± 0.03 error rate, while LADDER achieved 0.21 ± 0.02. In summary, codeword adaptation produces a significantly more discriminant prediction space.

This experiment was repeated for d ∈ [2, 27], with the results of Figure 3-right. For small d, LADDER substantially improves on MCBoost (about half the error rate for d ≤ 5). LADDER was also compared to various classical dimensionality reduction techniques that do not operate on the prediction space. These included PCA, LDA, Probabilistic PCA [33], Kernel PCA [35], Locality Preserving Projections (LPP) [16], and Neighborhood Preserving Embedding (NPE) [15]. All implementations were provided by [1]. For each method, the data was mapped to a lower dimension d and classified using MCBoost. LADDER outperformed all methods for all dimensions.
Hashing and retrieval: Image retrieval is a classical problem in vision [3, 4]. Encoding high dimensional feature vectors into short binary codes to enable large scale retrieval has gained momentum in the last few years [6, 38, 23, 13, 24, 26]. LADDER enables the design of an effective discriminant hash code for retrieval systems. To obtain a d-bit hash, we learn a predictor f(x) ∈ R^d. Each predictor coordinate is then thresholded and mapped to {0, 1}. Retrieval is finally based on the Hamming distance between these hash codes (see the sketch below). We compare this hashing method to a number of popular techniques on CIFAR-10 [21], which contains 60K images of ten classes. Evaluation was based on the test settings of [26], using 1,000 randomly selected images. Learning was based on a random set of 2,000 images, sampled from the remaining 59K. All images are represented as 512-dimensional GIST feature vectors [28]. The 1,000 test images were used to query a database containing the remaining 59K images.
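A minimal sketch of this hashing scheme, assuming the predictor coordinates are thresholded at zero (the paper does not specify the threshold) and ranking by Hamming distance; names are ours.

import numpy as np

def hash_codes(F):
    # Threshold each coordinate of the n x d predictions to get d-bit codes.
    return (F > 0).astype(np.uint8)

def hamming_retrieve(query_code, db_codes, k=10):
    # Rank database items by Hamming distance to the query code.
    dists = np.count_nonzero(db_codes != query_code, axis=1)
    return np.argsort(dists)[:k]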
Table 2: Left: Mean average precision (mAP) for CIFAR-10. Right: Classification accuracy on the MIT indoor scenes dataset.

                 hash length (bits)
Method           8       10      12
LSH              0.147   0.150   0.150
BRE              0.156   0.156   0.158
ITQ (unsup.)     0.162   0.159   0.164
ITQ (sup.)       0.220   0.225   0.231
MCBoost          0.200   0.250   0.250
KSH              0.237   0.252   0.253
LADDER           0.224   0.270   0.266

Method                  Accuracy
RBoW [29]               37.9%
SPM-SM [40]             44.0%
HMP [2]                 47.6%
conv5+PCA+FV            52.9%
conv5+MC-Boost+FV       52.8%
conv5+LADDER+FV         55.2%
Table 2-Left shows mean average precision (mAP) scores under different code lengths for LSH [6], BRE [23], ITQ [13], MCBoost [34], KSH [26] and LADDER. Several conclusions can be drawn. First, using a multiclass boosting technique with the predefined equally spaced codewords of (13), MCBoost, we observe competitive performance: on par with popular approaches such as ITQ, though slightly worse than KSH. Second, LADDER improves on MCBoost, with mAP gains that range from 6 to 12%. This is due to the ability of LADDER to adjust/learn codewords according to the training data. Finally, LADDER outperformed the other popular methods for hash code lengths ≥ 10 bits. These gains are about 5 and 7% as compared to KSH, the second best method.

Scene understanding: In this experiment we show that LADDER can provide more efficient dimensionality reduction than regular methods such as PCA. For this we selected the scene understanding pipeline of [30, 14], which consists of deep CNNs [22, 19], PCA, Fisher Vectors (FV) and SVM. PCA in this setting is necessary as the Fisher Vectors can become extremely high dimensional. We replaced the PCA component by embeddings of MCBoost and LADDER and compared their performance with PCA and other scene classification methods on the MIT Indoor dataset [32]. This is a dataset of 67 indoor scene categories where the standard train/test split contains 80 images for training and 20 images for testing per class. Table 2-Right summarizes the performance of the different methods. Again, even with a plain MCBoost predictor we observe competitive performance, on par with PCA. The performance is then improved by LADDER by learning the embedding and codewords jointly.
5 Conclusions
In this work we present a duality between boosting and SVM. This duality is used to propose a novel
discriminant dimensionality reduction method. We show that both boosting and K-SVM maximize
the margin, using the combination of a non-linear predictor and linear classification. For K-SVM,
the predictor (induced by the kernel) is fixed and the linear classifier is learned. For boosting, the
linear classifier is fixed and the predictor is learned. It follows from this duality that 1) the predictor
learned by boosting is a discriminant mapping, and 2) by iterating between boosting and SVM it
should be possible to design better discriminant mappings. We propose the LADDER algorithm to
efficiently implement the two steps and learn an embedding of arbitrary dimension. Experiments
show that LADDER learns low-dimensional spaces that are more discriminant.
References
[1] L. van der Maaten and G. Hinton. Visualizing High-Dimensional Data Using t-SNE. JMLR, MIT Press, 2008.
[2] L. Bo, X. Ren, and D. Fox. Unsup. Feature Learn. for RGB-D Based Object Recognition. In ISER, 2012.
[3] J. Costa Pereira and N. Vasconcelos. On the Regularization of Image Semantics by Modal Expansion. In Proc. IEEE CVPR, pages 3093–3099, 2012.
[4] J. Costa Pereira and N. Vasconcelos. Cross-modal domain adaptation for text-based regularization of image semantics in image retrieval systems. Comput. Vision Image Understand., 124:123–135, July 2014.
[5] K. Crammer and Y. Singer. On the algorithmic implementation of multiclass kernel-based vector machines. JMLR, MIT Press, 2:265–292, 2002.
[6] M. Datar, N. Immorlica, P. Indyk, and V. S. Mirrokni. Locality-sensitive hashing scheme based on p-stable distributions. In Proc. ACM Symp. on Comp. Geometry, pages 253–262, 2004.
[7] O. Dekel and Y. Singer. Multiclass learning by prob. embeddings. In Adv. NIPS, pages 945–952, 2002.
[8] P. Dollar, Z. Tu, P. Perona, and S. Belongie. Integral channel features. In Proc. BMVC, 2009.
[9] Y. Freund and R. E. Schapire. A decision-theoretic generalization of on-line learning and an application to boosting. Journal Comp. and Sys. Science, 1997.
[10] T. Gao and D. Koller. Multiclass boosting with hinge loss based on output coding. In ICML, 2011.
[11] J. Goldberger, S. Roweis, G. Hinton, and R. Salakhutdinov. Neighbourhood components analysis. In Adv. NIPS, pages 513–520, 2004.
[12] M. Gonen and E. Alpaydin. Multiple kernel learning algorithms. JMLR, MIT Press, 12:2211–2268, July 2011.
[13] Y. Gong, S. Lazebnik, A. Gordo, and F. Perronnin. Iterative quantization: A procrustean approach to learning binary codes for large-scale image retrieval. (99):1–15, 2012.
[14] Y. Gong, L. Wang, R. Guo, and S. Lazebnik. Multi-scale orderless pooling of deep convolutional activation features. In Proc. ECCV, 2014.
[15] X. He, D. Cai, S. Yan, and H.-J. Zhang. Neighborhood preserving embedding. In Proc. IEEE ICCV, 2005.
[16] X. He and P. Niyogi. Locality preserving projections. In Adv. NIPS, 2003.
[17] C. Hsu and C. Lin. A comparison of methods for multiclass support vector machines. IEEE Trans. Neural Netw., 13(2):415–425, 2002.
[18] J. Nocedal and S. J. Wright. Numerical Optimization. Springer Verlag, New York, 1999.
[19] Y. Jia, E. Shelhamer, J. Donahue, S. Karayev, J. Long, R. Girshick, S. Guadarrama, and T. Darrell. Caffe: Convolutional architecture for fast feature embedding. arXiv preprint arXiv:1408.5093, 2014.
[20] D. Knuth. The Art of Computer Programming: Sorting and Searching, 1973.
[21] A. Krizhevsky and G. Hinton. Learning multiple layers of features from tiny images. Technical report, University of Toronto, Dept. of Computer Science, 2009.
[22] A. Krizhevsky, I. Sutskever, and G. E. Hinton. Imagenet classification with deep convolutional neural networks. In Adv. NIPS, 2012.
[23] B. Kulis and T. Darrell. Learning to hash with binary reconstructive embeddings. In Adv. NIPS, volume 22, pages 1042–1050, 2009.
[24] B. Kulis and K. Grauman. Kernelized locality-sensitive hashing. IEEE Trans. Pattern Anal. Mach. Intell., 34(6):1092–1104, 2012.
[25] F. Larsson, M. Felsberg, and P. Forssen. Correlating Fourier descriptors of local patches for road sign recognition. IET Computer Vision, 5(4):244–254, 2011.
[26] W. Liu, J. Wang, R. Ji, Y.-G. Jiang, and S.-F. Chang. Supervised hashing with kernels. In Proc. IEEE CVPR, pages 2074–2081, 2012.
[27] I. Mukherjee and R. E. Schapire. A theory of multiclass boosting. In Adv. NIPS, 2010.
[28] A. Oliva and A. Torralba. Modeling the shape of the scene: A holistic representation of the spatial envelope. Int. Journal Comput. Vision, 42(3):145–175, 2001.
[29] S. N. Parizi, J. G. Oberlin, and P. F. Felzenszwalb. Reconfigurable models for scene recognition. In Proc. IEEE CVPR, 2012.
[30] F. Perronnin, J. Sánchez, and T. Mensink. Improving the Fisher kernel for large-scale image classification. In Proc. ECCV, 2010.
[31] P. Pudil, J. Novovičová, and J. Kittler. Floating search methods in feature selection. Pattern Recogn. Lett., pages 1119–1125, 1994.
[32] A. Quattoni and A. Torralba. Recognizing indoor scenes. Proc. IEEE CVPR, 2009.
[33] S. Roweis. EM Algorithms for PCA and SPCA. In Adv. NIPS, 1998.
[34] M. Saberian and N. Vasconcelos. Multiclass boosting: Theory and algorithms. In Adv. NIPS, 2011.
[35] B. Schölkopf, A. Smola, and K.-R. Müller. Nonlinear component analysis as a kernel eigenvalue problem. Neural Computation, pages 1299–1319, 1998.
[36] S. Sonnenburg, G. Rätsch, C. Schafer, and B. Schölkopf. Large scale multiple kernel learning. JMLR, MIT Press, 7:1531–1565, Dec. 2006.
[37] M. Sugiyama. Dimensionality reduction of multimodal labeled data by local Fisher discriminant analysis. JMLR, MIT Press, pages 1027–1061, 2007.
[38] A. Torralba, R. Fergus, and Y. Weiss. Small codes and large image databases for recognition. In Proc. IEEE CVPR, pages 1–8, 2008.
[39] V. N. Vapnik. Statistical Learning Theory. John Wiley & Sons Inc, 1998.
[40] N. Vasconcelos and N. Rasiwasia. Scene recognition on the semantic manifold. In Proc. ECCV, 2012.
[41] K. Q. Weinberger and O. Chapelle. Large margin taxonomy embedding for document categorization. In Adv. NIPS, 2009.
[42] K. Q. Weinberger and L. K. Saul. Distance metric learning for large margin nearest neighbor classification. JMLR, pages 207–244, 2009.
[43] J. Weston, S. Bengio, and N. Usunier. Large scale image annotation: Learning to rank with joint word-image embeddings. In Proc. ECML, 2010.
[44] J. Weston and C. Watkins. Support vector machines for multi-class pattern recognition. In Euro. Symp. on Artificial Neural Networks, pages 219–224, 1999.
[45] B. Zhao and E. Xing. Sparse output coding for large-scale visual recognition. In Proc. IEEE CVPR, pages 3350–3357, 2013.
6,034 | 6,459 | Efficient Globally Convergent Stochastic Optimization for Canonical Correlation Analysis

Weiran Wang^1*   Jialei Wang^2*   Dan Garber^1   Nathan Srebro^1
^1 Toyota Technological Institute at Chicago   ^2 University of Chicago
{weiranwang,dgarber,nati}@ttic.edu   jialei@uchicago.edu
Abstract
We study the stochastic optimization of canonical correlation analysis (CCA), whose objective is nonconvex and does not decouple over training samples. Although several stochastic gradient based optimization algorithms have been recently proposed to solve this problem, no global convergence guarantee was provided by any of them. Inspired by the alternating least squares/power iterations formulation of CCA, and the shift-and-invert preconditioning method for PCA, we propose two globally convergent meta-algorithms for CCA, both of which transform the original problem into sequences of least squares problems that need only be solved approximately. We instantiate the meta-algorithms with state-of-the-art SGD methods and obtain time complexities that significantly improve upon that of previous work. Experimental results demonstrate their superior performance.

1 Introduction
Canonical correlation analysis (CCA, [1]) and its extensions are ubiquitous techniques in scientific research areas for revealing the common sources of variability in multiple views of the same phenomenon. In CCA, the training set consists of paired observations from two views, denoted (x_1, y_1), ..., (x_N, y_N), where N is the training set size, x_i ∈ R^{dx} and y_i ∈ R^{dy} for i = 1, ..., N. We also denote the data matrices for each view^2 by X = [x_1, ..., x_N] ∈ R^{dx×N} and Y = [y_1, ..., y_N] ∈ R^{dy×N}, and d := dx + dy. The objective of CCA is to find linear projections of each view such that the correlation between the projections is maximized:

max_{u,v} u^T Σxy v   s.t.   u^T Σxx u = v^T Σyy v = 1,   (1)

where Σxy = (1/N) XY^T is the cross-covariance matrix, Σxx = (1/N) XX^T + γx I and Σyy = (1/N) YY^T + γy I are the auto-covariance matrices, and (γx, γy) ≥ 0 are regularization parameters [2].

We denote by (u*, v*) the global optimum of (1), which can be computed in closed-form. Define

T := Σxx^{-1/2} Σxy Σyy^{-1/2} ∈ R^{dx×dy},   (2)

and let (φ, ψ) be the (unit-length) left and right singular vector pair associated with T's largest singular value σ1. Then the optimal objective value, i.e., the canonical correlation between the views, is σ1, achieved by (u*, v*) = (Σxx^{-1/2} φ, Σyy^{-1/2} ψ). Note that

σ1 = ||T|| ≤ ||Σxx^{-1/2} X/√N|| · ||Σyy^{-1/2} Y/√N|| ≤ 1.

Furthermore, we are guaranteed to have σ1 < 1 if (γx, γy) > 0.

* The first two authors contributed equally.
^2 We assume that X and Y are centered at the origin for notational simplicity; if they are not, we can center them as a pre-processing operation.
Table 1: Time complexities of different algorithms for achieving an ε-suboptimal solution (u, v) to CCA, i.e., min((u^T Σxx u*)², (v^T Σyy v*)²) ≥ 1 − ε. GD = gradient descent, AGD = accelerated GD, SVRG = stochastic variance reduced gradient, ASVRG = accelerated SVRG. Note ASVRG provides speedup over SVRG only when κ̃ > N, and we show the dominant term in its complexity.

Algorithm              Least squares solver   Time complexity
AppGrad [3]            GD                     (local) Õ(dN κ̃ · σ1²/(σ1²−σ2²) · log(1/ε))
CCALin [6]             AGD                    Õ(dN √κ̃ · σ1²/(σ1²−σ2²) · log(1/ε))
This work:             AGD                    Õ(dN √κ̃ · (σ1²/(σ1²−σ2²))² · log²(1/ε))
alternating least      SVRG                   Õ(d(N + κ̃) · (σ1²/(σ1²−σ2²))² · log²(1/ε))
squares (ALS)          ASVRG                  Õ(d√(Nκ̃) · (σ1²/(σ1²−σ2²))² · log²(1/ε))
This work:             AGD                    Õ(dN √(κ̃ σ1/(σ1−σ2)) · log²(1/ε))
shift-and-invert       SVRG                   Õ(d(N + (κ̃ σ1/(σ1−σ2))²) · log²(1/ε))
preconditioning (SI)   ASVRG                  Õ(dN^{3/4} √(κ̃ σ1/(σ1−σ2)) · log²(1/ε))

For large and high dimensional datasets, it is time and memory consuming to first explicitly form the matrix T (which requires eigen-decomposition of the covariance matrices) and then compute its singular value decomposition (SVD). For such datasets, it is desirable to develop stochastic algorithms that have efficient updates, converge fast, and take advantage of the input sparsity. There have been recent attempts to solve (1) based on stochastic gradient descent (SGD) methods [3, 4, 5], but none of these works provides a rigorous convergence analysis for their stochastic CCA algorithms.

The main contribution of this paper is the proposal of two globally convergent meta-algorithms for solving (1), namely, alternating least squares (ALS, Algorithm 2) and shift-and-invert preconditioning (SI, Algorithm 3), both of which transform the original problem (1) into sequences of least squares problems that need only be solved approximately. We instantiate the meta-algorithms with state-of-the-art SGD methods and obtain efficient stochastic optimization algorithms for CCA.

In order to measure the alignments between an approximate solution (u, v) and the optimum (u*, v*), we assume that T has a positive singular value gap Δ := σ1 − σ2 ∈ (0, 1], so its top left and right singular vector pair is unique (up to a change of sign).

Table 1 summarizes the time complexities of several algorithms for achieving ε-suboptimal alignments, where κ̃ = max_i max(||x_i||², ||y_i||²) / min(σmin(Σxx), σmin(Σyy)) is an upper bound on the condition numbers of the least squares problems solved in all cases.^3 We use the notation Õ(·) to hide poly-logarithmic dependencies (see Sec. 3.1.1 and Sec. 3.2.3 for the hidden factors). Each time complexity may be preferable in a certain regime depending on the parameters of the problem.

Notations: We use σ_i(A) to denote the i-th largest singular value of a matrix A, and use σmax(A) and σmin(A) to denote the largest and smallest singular values of A respectively.
2 Motivation: Alternating least squares

Our solution to (1) is inspired by the alternating least squares (ALS) formulation of CCA [7, Algorithm 5.2], as shown in Algorithm 1. Let the nonzero singular values of T be 1 ≥ σ1 ≥ σ2 ≥ ... ≥ σr > 0, where r = rank(T) ≤ min(dx, dy), and the corresponding (unit-length) left and right singular vector pairs be (a1, b1), ..., (ar, br), with a1 = φ and b1 = ψ. Define

C = [0, T; T^T, 0] ∈ R^{d×d}.   (3)

^3 For the ALS meta-algorithm, it is enough to consider per-view conditioning. And when using AGD as the least squares solver, the time complexities depend on σmax(Σxx) instead, which is less than max_i ||x_i||².
Algorithm 1 Alternating least squares for CCA.
Input: Data matrices X ∈ R^{dx×N}, Y ∈ R^{dy×N}, regularization parameters (γx, γy).
Initialize ũ_0 ∈ R^{dx}, ṽ_0 ∈ R^{dy}.
u_0 ← ũ_0/√(ũ_0^T Σxx ũ_0),  v_0 ← ṽ_0/√(ṽ_0^T Σyy ṽ_0)      {φ_0 ← φ̃_0/||φ̃_0||,  ψ_0 ← ψ̃_0/||ψ̃_0||}
for t = 1, 2, ..., T do
    ũ_t ← Σxx^{-1} Σxy v_{t-1}      {φ̃_t ← Σxx^{-1/2} Σxy Σyy^{-1/2} ψ_{t-1}}
    ṽ_t ← Σyy^{-1} Σxy^T u_{t-1}    {ψ̃_t ← Σyy^{-1/2} Σxy^T Σxx^{-1/2} φ_{t-1}}
    u_t ← ũ_t/√(ũ_t^T Σxx ũ_t),  v_t ← ṽ_t/√(ṽ_t^T Σyy ṽ_t)   {φ_t ← φ̃_t/||φ̃_t||,  ψ_t ← ψ̃_t/||ψ̃_t||}
end for
Output: (u_T, v_T) → (u*, v*) as T → ∞.      {(φ_T, ψ_T) → (φ, ψ)}

It is straightforward to check that the nonzero eigenvalues of C are

σ1 ≥ ... ≥ σr ≥ −σr ≥ ... ≥ −σ1,

with corresponding eigenvectors

(1/√2)[a1; b1], ..., (1/√2)[ar; br], (1/√2)[ar; −br], ..., (1/√2)[a1; −b1].

The key observation is that Algorithm 1 effectively runs a variant of power iterations on C to extract its top eigenvector. To see this, make the following change of variables:

φ_t = Σxx^{1/2} u_t,   ψ_t = Σyy^{1/2} v_t,   φ̃_t = Σxx^{1/2} ũ_t,   ψ̃_t = Σyy^{1/2} ṽ_t.   (4)

Then we can equivalently rewrite the steps of Algorithm 1 in the new variables, as given in the braces {} of each line above. Observe that the iterates are updated as follows from step t−1 to step t:

[φ̃_t; ψ̃_t] = [0, T; T^T, 0] [φ_{t-1}; ψ_{t-1}],   [φ_t; ψ_t] = [φ̃_t/||φ̃_t||; ψ̃_t/||ψ̃_t||].   (5)

Except for the special normalization steps, which rescale the two sets of variables separately, Algorithm 1 is very similar to the power iterations [8].

We show the convergence rate of ALS below (see its proof in Appendix A). The first measure of progress is the alignment of φ_t to φ and the alignment of ψ_t to ψ, i.e., (φ_t^T φ)² = (u_t^T Σxx u*)² and (ψ_t^T ψ)² = (v_t^T Σyy v*)². The maximum value for such alignments is 1, achieved when the iterates completely align with the optimal solution. The second natural measure of progress is the objective of (1), i.e., u_t^T Σxy v_t, with the maximum value being σ1.

Theorem 1 (Convergence of Algorithm 1). Let θ := min((u_0^T Σxx u*)², (v_0^T Σyy v*)²) > 0.^4 Then for t ≥ ⌈(σ1²/(σ1²−σ2²)) log(2/(εθ))⌉, we have in Algorithm 1 that min((u_t^T Σxx u*)², (v_t^T Σyy v*)²) ≥ 1 − ε, and u_t^T Σxy v_t ≥ σ1(1 − 2ε).

Remarks: We have assumed a nonzero singular value gap in Theorem 1 to obtain linear convergence in both the alignments and the objective. When there exists no singular value gap, the top singular vector pair is not unique and it is no longer meaningful to measure the alignments. Nonetheless, it is possible to extend our proof to obtain sublinear convergence for the objective in this case.

Observe that, besides the steps of normalization to unit length, the basic operation in each iteration of Algorithm 1 is of the form ũ_t ← Σxx^{-1} Σxy v_{t-1} = ((1/N) XX^T + γx I)^{-1} (1/N) XY^T v_{t-1}, which is equivalent to solving the following regularized least squares (ridge regression) problem:

min_u (1/2N) ||u^T X − v_{t-1}^T Y||² + (γx/2)||u||²   ⟺   min_u (1/N) Σ_{i=1}^{N} (1/2)(u^T x_i − v_{t-1}^T y_i)² + (γx/2)||u||².   (6)

In the next section, we show that, to maintain the convergence of ALS, it is unnecessary to solve the least squares problems exactly. This enables us to use state-of-the-art SGD methods for solving (6) to sufficient accuracy, and to obtain a globally convergent stochastic algorithm for CCA.

^4 One can show that θ is bounded away from 0 with high probability using random initialization (u_0, v_0).
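The following dense NumPy sketch of Algorithm 1 makes the iteration concrete: each step solves the two ridge systems exactly with a linear solver, which is precisely the operation the stochastic variants of Section 3 replace with approximate SGD/SVRG solutions; names and defaults are ours.

import numpy as np

def als_cca(X, Y, gamma_x=1e-4, gamma_y=1e-4, T=100, seed=0):
    # X: dx x N, Y: dy x N (assumed centered). Returns (u, v) approximating
    # the top canonical pair; u @ Sxy @ v estimates sigma_1.
    dx, N = X.shape
    dy = Y.shape[0]
    Sxx = X @ X.T / N + gamma_x * np.eye(dx)
    Syy = Y @ Y.T / N + gamma_y * np.eye(dy)
    Sxy = X @ Y.T / N
    rng = np.random.default_rng(seed)
    u = rng.standard_normal(dx); u /= np.sqrt(u @ Sxx @ u)
    v = rng.standard_normal(dy); v /= np.sqrt(v @ Syy @ v)
    for _ in range(T):
        u_new = np.linalg.solve(Sxx, Sxy @ v)     # least squares step, cf. (6)
        v_new = np.linalg.solve(Syy, Sxy.T @ u)
        u = u_new / np.sqrt(u_new @ Sxx @ u_new)  # exact normalization
        v = v_new / np.sqrt(v_new @ Syy @ v_new)
    return u, v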
Algorithm 2 The alternating least squares (ALS) meta-algorithm for CCA.
Input: Data matrices X ∈ R^{dx×N}, Y ∈ R^{dy×N}, regularization parameters (γx, γy).
Initialize ũ_0 ∈ R^{dx}, ṽ_0 ∈ R^{dy}.
u_0 ← ũ_0/√(ũ_0^T Σxx ũ_0),  v_0 ← ṽ_0/√(ṽ_0^T Σyy ṽ_0)
for t = 1, 2, ..., T do
    Solve min_u f_t(u) := (1/2N) ||u^T X − v_{t-1}^T Y||² + (γx/2)||u||² with initialization ũ_{t-1}, and output an approximate solution ũ_t satisfying f_t(ũ_t) ≤ min_u f_t(u) + ε̃.
    Solve min_v g_t(v) := (1/2N) ||v^T Y − u_{t-1}^T X||² + (γy/2)||v||² with initialization ṽ_{t-1}, and output an approximate solution ṽ_t satisfying g_t(ṽ_t) ≤ min_v g_t(v) + ε̃.
    u_t ← ũ_t/√(ũ_t^T Σxx ũ_t),  v_t ← ṽ_t/√(ṽ_t^T Σyy ṽ_t)
end for
Output: (u_T, v_T) is the approximate solution to CCA.
3 Our algorithms

3.1 Algorithm I: Alternating least squares (ALS) with variance reduction

Our first algorithm consists of two nested loops. The outer loop runs inexact power iterations, while the inner loop uses advanced stochastic optimization methods, e.g., stochastic variance reduced gradient (SVRG, [9]), to obtain approximate matrix-vector multiplications. A sketch of our algorithm is provided in Algorithm 2. We make the following observations from this algorithm.
Connection to previous work: At step t, if we optimize f_t(u) and g_t(v) crudely by a single batch gradient descent step from the initialization (ũ_{t-1}, ṽ_{t-1}), we obtain the following update rule:

ũ_t ← ũ_{t-1} − (2η/N) X(X^T ũ_{t-1} − Y^T v_{t-1}),   u_t ← ũ_t/√(ũ_t^T Σxx ũ_t),
ṽ_t ← ṽ_{t-1} − (2η/N) Y(Y^T ṽ_{t-1} − X^T u_{t-1}),   v_t ← ṽ_t/√(ṽ_t^T Σyy ṽ_t),

where η > 0 is the stepsize (assuming γx = γy = 0). This coincides with the AppGrad algorithm of [3, Algorithm 3], for which only local convergence is shown. Since the objectives f_t(u) and g_t(v) decouple over training samples, it is convenient to apply SGD methods to them. This observation motivated the stochastic CCA algorithms of [3, 4]. We note however, no global convergence guarantee was shown for these stochastic CCA algorithms, and the key to our convergent algorithm is to solve the least squares problems to sufficient accuracy.
Warm-start: Observe that for different t, the least squares problems f_t(u) only differ in their targets, as v_t changes over time. Since v_{t-1} is close to v_t (especially when near convergence), we may use ũ_t as initialization for minimizing f_{t+1}(u) with an iterative algorithm.

Normalization: At the end of each outer loop, Algorithm 2 implements exact normalization of the form u_t ← ũ_t/√(ũ_t^T Σxx ũ_t) to ensure the constraints, where ũ_t^T Σxx ũ_t = (1/N)(ũ_t^T X)(ũ_t^T X)^T + γx ||ũ_t||² requires computing the projection ũ_t^T X of the training set. However, this does not introduce extra computation because we also compute this projection for the batch gradient used by SVRG (at the beginning of time step t+1). In contrast, the stochastic algorithms of [3, 4] (possibly adaptively) estimate the covariance matrix from a minibatch of training samples and use the estimated covariance for normalization. This is because their algorithms perform normalizations after each update and thus need to avoid computing the projection of the entire training set frequently. But as a result, their inexact normalization steps introduce noise into the algorithms.

Input sparsity: For high dimensional sparse data (such as those used in natural language processing [10]), an advantage of gradient based methods over the closed-form solution is that the former take into account the input sparsity. For sparse inputs, the time complexity of our algorithm depends on nnz(X, Y), i.e., the total number of nonzeros in the inputs, instead of dN.

Canonical ridge: When (γx, γy) > 0, f_t(u) and g_t(v) are guaranteed to be strongly convex due to the ℓ2 regularizations, in which case SVRG converges linearly. It is therefore beneficial to use small nonzero regularization for improved computational efficiency, especially for high dimensional datasets where the inputs X and Y are approximately low-rank.

Convergence: By the analysis of inexact power iterations, where the least squares problems are solved (or the matrix-vector multiplications are computed) only up to the necessary accuracy, we provide the following theorem for the convergence of Algorithm 2 (see its proof in Appendix B). The key to our analysis is to bound the distances between the iterates of Algorithm 2 and those of Algorithm 1 at all time steps; when the errors of the least squares problems are sufficiently small (at the level of ε²), the iterates of the two algorithms have the same quality.

Theorem 2 (Convergence of Algorithm 2). Fix T ≥ ⌈(σ1²/(σ1²−σ2²)) log(2/(εθ))⌉, and set ε̃(T) ≤ (ε² θ σr²/128) · ((2σ1/σr) − 1)/((2σ1/σr)^T − 1) in Algorithm 2. Then we have u_T^T Σxx u_T = v_T^T Σyy v_T = 1, min((u_T^T Σxx u*)², (v_T^T Σyy v*)²) ≥ 1 − ε, and u_T^T Σxy v_T ≥ σ1(1 − 2ε).
3.1.1 Stochastic optimization of regularized least squares

We now discuss the inner loop of Algorithm 2, which approximately solves problems of the form (6). Owing to the finite-sum structure of (6), several stochastic optimization methods, such as SAG [11], SDCA [12] and SVRG [9], provide linear convergence rates. All these algorithms can be readily applied to (6); we choose SVRG since it is memory efficient and easy to implement. We also apply the recently developed acceleration techniques for first order optimization methods [13, 14] to obtain an accelerated SVRG (ASVRG) algorithm. We give the sketch of SVRG for (6) in Appendix C.

Note that f(u) = (1/N) Σ_{i=1}^{N} f^i(u), where each component f^i(u) = (1/2)(u^T x_i − v^T y_i)² + (γx/2)||u||² is ||x_i||²-smooth, and f(u) is σmin(Σxx)-strongly convex^5 with σmin(Σxx) ≥ γx. We show in Appendix D that the initial suboptimality for minimizing f_t(u) is upper-bounded by a constant when using the warm-starts. We quote the convergence rates of SVRG [9] and ASVRG [14] below.

Lemma 3. The SVRG algorithm [9] finds a vector ũ satisfying^6 E[f(ũ)] ≤ min_u f(u) + ε in time O(dx (N + κ̃x) log(1/ε)), where κ̃x = max_i ||x_i||² / σmin(Σxx). The ASVRG algorithm [14] finds such a solution in time Õ(dx √(N κ̃x) log(1/ε)).

Remarks: As mentioned in [14], the accelerated version provides speedup over normal SVRG only when κ̃x > N, and we only show the dominant term in the above complexity.

^5 We omit the regularization in these constants, which are typically very small, to have concise expressions.
^6 The expectation is taken over random sampling of component functions. High probability error bounds can be obtained using Markov's inequality.
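For illustration, a minimal SVRG sketch for the inner ridge problem (6), where b_i = v_{t-1}^T y_i are fixed targets within one outer iteration; the step size and epoch length below are illustrative choices rather than the tuned values of Appendix C.

import numpy as np

def svrg_ridge(X, b, gamma, u0, epochs=20, m=None, eta=None, seed=0):
    # min_u (1/N) sum_i 0.5 (u.x_i - b_i)^2 + gamma/2 ||u||^2,
    # with X: d x N and b: length-N targets.
    d, N = X.shape
    m = m or N
    L = np.max(np.sum(X ** 2, axis=0)) + gamma   # component smoothness bound
    eta = eta or 1.0 / (10 * L)
    rng = np.random.default_rng(seed)
    u = u0.copy()
    for _ in range(epochs):
        g_full = X @ (X.T @ u - b) / N + gamma * u  # batch gradient at snapshot
        w, snap = u.copy(), u.copy()
        for _ in range(m):
            i = rng.integers(N)
            xi = X[:, i]
            g_w = (xi @ w - b[i]) * xi + gamma * w
            g_s = (xi @ snap - b[i]) * xi + gamma * snap
            w -= eta * (g_w - g_s + g_full)         # variance-reduced step
        u = w
    return u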
By combining the iteration complexity of the outer loop (Theorem 2) and the time complexity of the inner loop (Lemma 3), we obtain a total time complexity of Õ(d(N + κ̃)(σ1²/(σ1²−σ2²))² log²(1/ε)) for ALS+SVRG, and Õ(d√(Nκ̃)(σ1²/(σ1²−σ2²))² log²(1/ε)) for ALS+ASVRG, where κ̃ := max_i max(||x_i||², ||y_i||²) / min(σmin(Σxx), σmin(Σyy)) and Õ(·) hides poly-logarithmic dependences on 1/θ and σ1/σr. Our algorithm does not require the initialization to be close to the optimum and converges globally. For comparison, the locally convergent AppGrad has a time complexity [3, Theorem 2.1] of Õ(dN κ′ (σ1²/(σ1²−σ2²)) log(1/ε)), where κ′ := max(σmax(Σxx)/σmin(Σxx), σmax(Σyy)/σmin(Σyy)). Note, in this complexity, the dataset size N and the least squares condition number κ′ are multiplied together because AppGrad essentially uses batch gradient descent as the least squares solver. Within our framework, we can use accelerated gradient descent (AGD, [15]) instead and obtain a globally convergent algorithm with a total time complexity of Õ(dN √κ′ (σ1²/(σ1²−σ2²))² log²(1/ε)).

3.2 Algorithm II: Shift-and-invert preconditioning (SI) with variance reduction

The second algorithm is inspired by the shift-and-invert preconditioning method for PCA [16, 17]. Instead of running power iterations on C as defined in (3), we will be running power iterations on

M_λ = (λI − C)^{-1} = [λI, −T; −T^T, λI]^{-1} ∈ R^{d×d},   (7)

where λ > σ1. It is straightforward to check that M_λ is positive definite and its eigenvalues are

1/(λ−σ1) ≥ ... ≥ 1/(λ−σr) ≥ 1/(λ+σr) ≥ ... ≥ 1/(λ+σ1),

with eigenvectors

(1/√2)[a1; b1], ..., (1/√2)[ar; br], (1/√2)[ar; −br], ..., (1/√2)[a1; −b1].

The main idea behind shift-and-invert power iterations is that when λ − σ1 = c(σ1 − σ2) with c = O(1), the relative eigenvalue gap of M_λ is large and so power iterations on M_λ converge quickly. Our shift-and-invert preconditioning (SI) meta-algorithm for CCA is sketched in Algorithm 3 (in Appendix E due to space limit) and it proceeds in two phases.
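A dense sketch of power iterations on M_λ of (7), written in the whitened coordinates where the Σ-weighted normalization of Phase I reduces to a Euclidean one; Algorithm 3 never forms T or C and solves each linear system only approximately with SGD/SVRG. All names here are ours.

import numpy as np

def si_power_iteration(T_mat, lam, iters=50, seed=0):
    # T_mat: the dx x dy matrix T; lam > sigma_1 so that lam*I - C is
    # positive definite. Each step solves a linear system in place of
    # applying (lam*I - C)^{-1}.
    dx, dy = T_mat.shape
    C = np.zeros((dx + dy, dx + dy))
    C[:dx, dx:] = T_mat
    C[dx:, :dx] = T_mat.T
    A = lam * np.eye(dx + dy) - C
    rng = np.random.default_rng(seed)
    w = rng.standard_normal(dx + dy)
    w /= np.linalg.norm(w)
    for _ in range(iters):
        w = np.linalg.solve(A, w)      # inexact in the actual algorithm
        w /= np.linalg.norm(w)
    # The Rayleigh quotient estimates the top eigenvalue 1/(lam - sigma_1).
    return w, w @ np.linalg.solve(A, w)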
Phase I: shift-and-invert preconditioning for eigenvectors of M_λ. Using an estimate Δ̃ of the singular value gap, and starting from an over-estimate of σ1 (1 + Δ̃ suffices), the algorithm gradually shrinks λ(s) towards σ1 by crudely estimating the leading eigenvector/eigenvalue of each M_{λ(s)} and shrinking the gap λ(s) − σ1 along the way, until we reach a λ(f) ∈ (σ1, σ1 + c(σ1 − σ2)) where c = O(1). Afterwards, the algorithm fixes λ(f) and runs inexact power iterations on M_{λ(f)} to obtain an accurate estimate of its leading eigenvector. Note that in this phase, power iterations implicitly operate on the concatenated variables (1/√2)[Σxx^{1/2} u_t; Σyy^{1/2} v_t] ∈ R^d (but without ever computing Σxx^{1/2} or Σyy^{1/2}).

3.2.1 Matrix-vector multiplication

The matrix-vector multiplications in Phase I have the form

[ũ_t; ṽ_t] ← [λΣxx, −Σxy; −Σxy^T, λΣyy]^{-1} [Σxx u_{t-1}; Σyy v_{t-1}],   (8)

where λ varies over time in order to locate λ(f). This is equivalent to solving

min_{u,v} (1/2) [u; v]^T [λΣxx, −Σxy; −Σxy^T, λΣyy] [u; v] − u^T Σxx u_{t-1} − v^T Σyy v_{t-1}.

And as in ALS, this least squares problem can be further written as a finite sum:

min_{u,v} h_t(u, v) = (1/N) Σ_{i=1}^{N} h_t^i(u, v),   where   (9)

h_t^i(u, v) = (1/2) [u; v]^T [λ(x_i x_i^T + γx I), −x_i y_i^T; −y_i x_i^T, λ(y_i y_i^T + γy I)] [u; v] − u^T Σxx u_{t-1} − v^T Σyy v_{t-1}.

We could directly apply SGD methods to this problem as before.

Normalization: The normalization steps in Phase I have the form

[u_t; v_t] ← √2 · [ũ_t; ṽ_t] / √(ũ_t^T Σxx ũ_t + ṽ_t^T Σyy ṽ_t),

and so the following remains true for the normalized iterates in Phase I:

u_t^T Σxx u_t + v_t^T Σyy v_t = 2,   for t = 1, ..., T.   (10)

Unlike the normalizations in ALS, the iterates u_t and v_t in Phase I do not satisfy the original CCA constraints, and this is taken care of in Phase II.

We have the following convergence guarantee for Phase I (see its proof in Appendix F).

Theorem 4 (Convergence of Algorithm 3, Phase I). Let Δ := σ1 − σ2 ∈ (0, 1], θ̃ := (1/4)(u_0^T Σxx u* + v_0^T Σyy v*)² > 0, and Δ̃ ∈ [c̲Δ, c̄Δ] where 0 < c̲ ≤ c̄ ≤ 1. Choose the iteration counts m_1 = Θ(log(1/θ̃)) and m_2 = Θ(log(1/(θ̃ε))), and set the least squares accuracy ε̃ in Algorithm 3 as specified in Appendix F (a polynomial in θ̃, Δ̃ and ε). Then the (u_T, v_T) output by Phase I of Algorithm 3 satisfies (10) and

(1/4)(u_T^T Σxx u* + v_T^T Σyy v*)² ≥ 1 − ε²/64,   (11)

and the number of calls to the least squares solver of h_t(u, v) is O(log(1/Δ̃) log(1/θ̃) + log(1/(θ̃ε²))).
3.2.2 Phase II: final normalization

In order to satisfy the CCA constraints, we perform a last normalization

û ← u_T/√(u_T^T Σxx u_T),   v̂ ← v_T/√(v_T^T Σyy v_T).   (12)

And we output (û, v̂) as our final approximate solution to (1). We show that this step does not cause much loss in the alignments, as stated below (see its proof in Appendix G).

Theorem 5 (Convergence of Algorithm 3, Phase II). Let Phase I of Algorithm 3 output (u_T, v_T) that satisfy (11). Then after (12), we obtain an approximate solution (û, v̂) to (1) such that û^T Σxx û = v̂^T Σyy v̂ = 1, min((û^T Σxx u*)², (v̂^T Σyy v*)²) ≥ 1 − ε, and û^T Σxy v̂ ≥ σ1(1 − 2ε).
3.2.3 Time complexity

We have shown in Theorem 4 that Phase I only approximately solves a small number of instances of (9). The normalization steps (10) require computing the projections of the training set, which are reused for computing batch gradients of (9). The final normalization (12) is done only once and costs O(dN). Therefore, the time complexity of our algorithm mainly comes from solving the least squares problems (9) using SGD methods in a blackbox fashion. And the time complexity for SGD methods depends on the condition number of (9). Denote

Q_λ = [Σxx^{1/2}, 0; 0, Σyy^{1/2}] [λI, −T; −T^T, λI] [Σxx^{1/2}, 0; 0, Σyy^{1/2}] = [λΣxx, −Σxy; −Σxy^T, λΣyy].   (13)

It is clear that

σmax(Q_λ) ≤ (λ + σ1) · max(σmax(Σxx), σmax(Σyy)),
σmin(Q_λ) ≥ (λ − σ1) · min(σmin(Σxx), σmin(Σyy)).

We have shown in the proof of Theorem 4 that (λ + σ1)/(λ − σ1) ≤ (9/c̲) · σ1/(σ1 − σ2) throughout Algorithm 3 (cf. Lemma 10, Appendix F.2), and thus the condition number for AGD is σmax(Q_λ)/σmin(Q_λ) ≤ (9/c̲) · (σ1/(σ1 − σ2)) · κ′, where κ′ := max(σmax(Σxx), σmax(Σyy)) / min(σmin(Σxx), σmin(Σyy)). For SVRG/ASVRG, the relevant condition number depends on the gradient Lipschitz constant of the individual components. We show in Appendix H (Lemma 12) that the relevant condition number is at most (9/c̲) · κ̃ · σ1/(σ1 − σ2), where κ̃ := max_i max(||x_i||², ||y_i||²) / min(σmin(Σxx), σmin(Σyy)). An interesting issue for SVRG/ASVRG is that, depending on the value of λ, the individual components h_t^i(u, v) may be nonconvex. If λ ≥ 1, each component is still guaranteed to be convex; otherwise, some components might be non-convex, with the overall average (1/N) Σ_i h_t^i still convex. In the latter case, we use the modified analysis of SVRG [16, Appendix B] for its time complexity. We use warm-start in SI as in ALS, and the initial suboptimality for each subproblem can be bounded similarly.

The total time complexities of our SI meta-algorithm are given in Table 1. Note that κ̃ (or κ′) and σ1/(σ1 − σ2) are multiplied together, giving the effective condition number. When using SVRG as the least squares solver, we obtain a total time complexity of Õ(d(N + κ̃ σ1/(σ1 − σ2)) log²(1/ε)) if all components are convex, and Õ(d(N + (κ̃ σ1/(σ1 − σ2))²) log²(1/ε)) otherwise. When using ASVRG, we have Õ(d√(N κ̃ σ1/(σ1 − σ2)) log²(1/ε)) if all components are convex, and Õ(dN^{3/4} √(κ̃ σ1/(σ1 − σ2)) log²(1/ε)) otherwise. Here Õ(·) hides poly-logarithmic dependences on 1/θ̃ and 1/Δ̃. It is remarkable that the SI meta-algorithm is able to separate the dependence on the dataset size N from the other parameters in the time complexities.
Parallel work: In a parallel work [6], the authors independently proposed a similar ALS algorithm^7, and they solve the least squares problems using AGD. The time complexity of their algorithm for extracting the first canonical correlation is Õ(dN √κ′ (σ1²/(σ1²−σ2²)) log(1/ε)), which has linear dependence on (σ1²/(σ1²−σ2²)) log(1/ε) (so their algorithm is linearly convergent, while our complexity for ALS+AGD has quadratic dependence on this factor), but typically worse dependence on N and κ′ (see the remarks in Section 3.1.1). Moreover, our SI algorithm tends to significantly outperform ALS theoretically and empirically. It is future work to remove the extra log(1/ε) dependence in our analysis.

^7 Our arXiv preprint for the ALS meta-algorithm was posted before their paper got accepted by ICML 2016.
[Figure 1: a 3 × 4 grid of plots (rows: Mediamill, JW11, MNIST; columns: γ_x = γ_y ∈ {10⁻⁵, 10⁻⁴, 10⁻³, 10⁻²}) showing suboptimality vs. # passes for CCALin, AppGrad, S-AppGrad, ALS-VR, ALS-AVR, SI-VR, and SI-AVR; each panel is titled with its κ̃ and Δ values, e.g. κ̃ = 53340, Δ = 5.345 for Mediamill with γ_x = γ_y = 10⁻⁵.]
Figure 1: Comparison of suboptimality vs. # passes for different algorithms. For each dataset and regularization parameters (γ_x, γ_y), we give κ̃ = max(λ_max(Σ_xx)/λ_min(Σ_xx), λ_max(Σ_yy)/λ_min(Σ_yy)) and Δ = σ_1²/(σ_1² − σ_2²).
Extension to multi-dimensional projections. To extend our algorithms to L-dimensional projections, we can extract the dimensions sequentially and remove the explained correlation from Σ_xy each time we extract a new dimension [18]. For the ALS meta-algorithm, a cleaner approach is to extract the L dimensions simultaneously using (inexact) orthogonal iterations [8], in which case the subproblems become multi-dimensional regressions and our normalization steps are of the form U_t ← Ũ_t (Ũ_t^⊤ Σ_xx Ũ_t)^{−1/2} (the same normalization is used by [3, 4]). Such normalization involves the eigenvalue decomposition of an L × L matrix and can be solved exactly, as we typically look for low-dimensional projections. Our analysis for L = 1 can be extended to this scenario, and the convergence rate of ALS will depend on the gap between σ_L and σ_{L+1}.
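As an illustration, the following sketch (ours, with assumed variable names) performs the normalization U_t ← Ũ_t (Ũ_t^⊤ Σ_xx Ũ_t)^{−1/2} with a single L × L eigendecomposition:

```python
import numpy as np

def normalize(U_tilde, Sxx):
    """Return U = U_tilde (U_tilde^T Sxx U_tilde)^{-1/2}, so that U^T Sxx U = I_L.

    U_tilde: (d, L) approximate basis; Sxx: (d, d) covariance matrix.
    Only an L x L eigendecomposition is required, which is cheap for small L.
    """
    M = U_tilde.T @ Sxx @ U_tilde            # L x L symmetric PSD matrix
    evals, evecs = np.linalg.eigh(M)         # exact eigendecomposition
    inv_sqrt = evecs @ np.diag(evals ** -0.5) @ evecs.T
    return U_tilde @ inv_sqrt

# quick check: the columns become Sxx-orthonormal
d, L = 10, 2
rng = np.random.default_rng(0)
A = rng.standard_normal((d, d)); Sxx = A @ A.T / d + np.eye(d)
U = normalize(rng.standard_normal((d, L)), Sxx)
assert np.allclose(U.T @ Sxx @ U, np.eye(L), atol=1e-8)
```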
4 Experiments
We demonstrate the proposed algorithms, namely ALS-VR, ALS-AVR, SI-VR, and SI-AVR, abbreviated as "meta-algorithm + least squares solver" (VR for SVRG, and AVR for ASVRG), on three real-world datasets: Mediamill [19] (N = 3 × 10⁴), JW11 [20] (N = 3 × 10⁴), and MNIST [21] (N = 6 × 10⁴). We compare our algorithms with batch AppGrad and its stochastic version s-AppGrad [3], as well as the CCALin algorithm in the parallel work [6]. For each algorithm, we compare the canonical correlation estimated by the iterates at different numbers of passes over the data with that of the exact solution by SVD. For each dataset, we vary the regularization parameters γ_x = γ_y over {10⁻⁵, 10⁻⁴, 10⁻³, 10⁻²} to vary the least squares condition numbers; larger regularization leads to better conditioning. We plot the suboptimality in objective vs. # passes for each algorithm in Figure 1. Experimental details (e.g., SVRG parameters) are given in Appendix I.

We make the following observations from the results. First, the proposed stochastic algorithms significantly outperform the batch gradient based methods AppGrad/CCALin. This is because the least squares condition numbers for these datasets are large, and SVRG enables us to decouple the dependences on the dataset size N and the condition number κ in the time complexity. Second, SI-VR converges faster than ALS-VR as it further decouples the dependence on N and the singular value gap of T. Third, inexact normalizations keep the s-AppGrad algorithm from converging to an accurate solution. Finally, ASVRG improves over SVRG when the condition number is large.
Acknowledgments
Research partially supported by NSF BIGDATA grant 1546500.
References
[1] H. Hotelling. Relations between two sets of variates. Biometrika, 28(3/4):321–377, 1936.
[2] H. D. Vinod. Canonical ridge and econometrics of joint production. J. Econometrics, 1976.
[3] Z. Ma, Y. Lu, and D. Foster. Finding linear structure in large datasets with scalable canonical
correlation analysis. In ICML, 2015.
[4] W. Wang, R. Arora, N. Srebro, and K. Livescu. Stochastic optimization for deep CCA via
nonlinear orthogonal iterations. In ALLERTON, 2015.
[5] B. Xie, Y. Liang, and L. Song. Scale up nonlinear component analysis with doubly stochastic
gradients. In NIPS, 2015.
[6] R. Ge, C. Jin, S. Kakade, P. Netrapalli, and A. Sidford. Efficient algorithms for large-scale
generalized eigenvector computation and canonical correlation analysis. arXiv, April 13 2016.
[7] G. Golub and H. Zha. Linear Algebra for Signal Processing, chapter The Canonical Correlations of Matrix Pairs and their Numerical Computation, pages 27–49. 1995.
[8] G. Golub and C. van Loan. Matrix Computations. third edition, 1996.
[9] R. Johnson and T. Zhang. Accelerating stochastic gradient descent using predictive variance
reduction. In NIPS, 2013.
[10] Y. Lu and D. Foster. Large scale canonical correlation analysis with iterative least squares. In
NIPS, 2014.
[11] M. Schmidt, N. Le Roux, and F. Bach. Minimizing finite sums with the stochastic average
gradient. Technical Report HAL 00860051, École Normale Supérieure, 2013.
[12] S. Shalev-Shwartz and T. Zhang. Stochastic dual coordinate ascent methods for regularized
loss minimization. Journal of Machine Learning Research, 2013.
[13] R. Frostig, R. Ge, S. Kakade, and A. Sidford. Un-regularizing: Approximate proximal point
and faster stochastic algorithms for empirical risk minimization. In ICML, 2015.
[14] H. Lin, J. Mairal, and Z. Harchaoui. A universal catalyst for first-order optimization. In NIPS,
2015.
[15] Y. Nesterov. Introductory Lectures on Convex Optimization. A Basic Course. Springer, 2004.
[16] D. Garber and E. Hazan. Fast and simple PCA via convex optimization. arXiv, 2015.
[17] C. Jin, S. Kakade, C. Musco, P. Netrapalli, and A. Sidford. Robust shift-and-invert preconditioning: Faster and more sample efficient algorithms for eigenvector computation. 2015.
[18] D. Witten, R. Tibshirani, and T. Hastie. A penalized matrix decomposition, with applications
to sparse principal components and canonical correlation analysis. Biostatistics, 2009.
[19] C. Snoek, M. Worring, J. van Gemert, J. Geusebroek, and A. Smeulders. The challenge problem for automated detection of 101 semantic concepts in multimedia. In MULTIMEDIA, 2006.
[20] J. Westbury. X-Ray Microbeam Speech Production Database User's Handbook, 1994.
[21] Y. LeCun, L. Bottou, Y. Bengio, and P. Haffner. Gradient-based learning applied to document
recognition. Proc. IEEE, 86(11):2278–2324, 1998.
[22] M. Warmuth and D. Kuzmin. Randomized online PCA algorithms with regret bounds that are
logarithmic in the dimension. Journal of Machine Learning Research, 2008.
[23] R. Arora, A. Cotter, K. Livescu, and N. Srebro. Stochastic optimization for PCA and PLS. In
ALLERTON, 2012.
[24] A. Balsubramani, S. Dasgupta, and Y. Freund. The fast convergence of incremental PCA. In
NIPS, 2013.
[25] O. Shamir. A stochastic PCA and SVD algorithm with an exponential convergence rate. In
ICML, 2015.
[26] F. Yger, M. Berar, G. Gasso, and A. Rakotomamonjy. Adaptive canonical correlation analysis
based on matrix manifolds. In ICML, 2012.
6,035 | 646 | On the Use of Projection Pursuit Constraints for
Training Neural Networks
Nathan Intrator*
Computer Science Department
Tel-Aviv University
Ramat-Aviv, 69978 ISRAEL
and
Institute for Brain and Neural Systems,
Brown University
nin@math.tau.ac.il
Abstract
We present a novel classification and regression method that combines exploratory projection pursuit (unsupervised training) with projection pursuit regression (supervised training), to yield a new family of cost/complexity penalty terms. Some improved generalization properties are demonstrated on real world problems.
1 Introduction
Parameter estimation becomes difficult in high-dimensional spaces due to the increasing sparseness of the data. Therefore, when a low dimensional representation is embedded in the data, dimensionality reduction methods become useful. One such method, projection pursuit regression (Friedman and Stuetzle, 1981) (PPR), is capable of performing dimensionality reduction by composition, namely, it constructs an approximation to the desired response function using a composition of lower dimensional smooth functions. These functions depend on low dimensional projections through the data.

* Research was supported by the National Science Foundation, the Army Research Office, and the Office of Naval Research.
When the dimensionality of the problem is in the thousands, even projection pursuit methods are almost always over-parametrized; therefore, additional smoothing is needed for low variance estimation. Exploratory Projection Pursuit (Friedman and Tukey, 1974; Friedman, 1987) (EPP) may be useful for that. It searches in a high dimensional space for structure in the form of (semi) linear projections with constraints characterized by a projection index. The projection index may be considered as a universal prior for a large class of problems, or may be tailored to a specific problem based on prior knowledge.

In this paper, the general form of exploratory projection pursuit is formulated to be an additional constraint for projection pursuit regression. In particular, a hybrid combination of supervised and unsupervised artificial neural network (ANN) is described as a special case. In addition, a specific projection index that is particularly useful for classification (Intrator, 1990; Intrator and Cooper, 1992) is introduced in this context. A more detailed discussion appears in Intrator (1993).
2 Brief Description of Projection Pursuit Regression
Let (X, Y) be a pair of random variables, X ∈ R^d, and Y ∈ R. The problem is to approximate the d dimensional surface

    f(x) = E[Y | X = x]

from n observations (x_1, y_1), ..., (x_n, y_n).

PPR tries to approximate a function f by a sum of ridge functions (functions that are constant along lines)

    f(x) ≈ Σ_{j=1}^{l} g_j(a_j^T x).

The fitting procedure alternates between an estimation of a direction a and an estimation of a smooth function g, such that at iteration j, the square average of the residuals

    r_{ij}(x_i) = r_{i,j−1} − g_j(a_j^T x_i)

is minimized. This process is initialized by setting r_{i0} = y_i. Usually, the initial values of a_j are taken to be the first few principal components of the data.

Estimation of the ridge functions can be achieved by various nonparametric smoothing techniques such as locally linear functions (Friedman and Stuetzle, 1981), k-nearest neighbors (Hall, 1989b), splines or variable degree polynomials. The smoothness constraint imposed on g implies that the actual projection pursuit is achieved by minimizing at iteration j the sum

    Σ_{i=1}^{n} r_{ij}²(x_i) + C(g_j)

for some smoothness measure C.
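To make the alternating fit concrete, here is a minimal sketch of the PPR loop. It is not from the paper: the kernel smoother and the crude random search over directions are our own simplifications, standing in for the smoothers and direction updates used in practice.

```python
import numpy as np

def smooth_1d(z, y, bandwidth=0.3):
    """Nadaraya-Watson smoother: returns a callable g with g(z) ~ E[y | z]."""
    def g(zq):
        w = np.exp(-0.5 * ((zq[:, None] - z[None, :]) / bandwidth) ** 2)
        return (w * y).sum(axis=1) / w.sum(axis=1)
    return g

def ppr_fit(X, y, n_terms=3, n_candidates=200, rng=None):
    """Greedy projection pursuit regression: f(x) ~ sum_j g_j(a_j^T x)."""
    rng = np.random.default_rng(rng)
    n, d = X.shape
    residual = y.astype(float).copy()          # r_{i0} = y_i
    terms = []
    for _ in range(n_terms):
        best = None
        for _ in range(n_candidates):          # crude search over directions
            a = rng.standard_normal(d); a /= np.linalg.norm(a)
            z = X @ a
            g = smooth_1d(z, residual)
            sse = np.sum((residual - g(z)) ** 2)
            if best is None or sse < best[0]:
                best = (sse, a, g)
        _, a, g = best
        residual = residual - g(X @ a)         # r_{ij} = r_{i,j-1} - g_j(a_j^T x_i)
        terms.append((a, g))
    return terms
```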
Although PPR converges to the desired response function (Jones, 1987), the use of non-parametric function estimation is likely to lead to overfitting. Recent results (Hornik, 1991) suggest that a feed forward network architecture with a single hidden layer and a rather general fixed activation function is a universal approximator. Therefore, the use of a non-parametric single ridge function estimation can be avoided. It is thus appropriate to concentrate on the estimation of good projections. In the next section we present a general framework of PPR architecture, and in section 4 we restrict it to a feed-forward architecture with sigmoidal hidden units.
3 Estimating The Projections Using Exploratory Projection Pursuit
Exploratory projection pursuit is based on seeking interesting projections of high dimensional data points (Kruskal, 1969; Switzer, 1970; Kruskal, 1972; Friedman and Tukey, 1974; Friedman, 1987; Jones and Sibson, 1987; Hall, 1988; Huber, 1985, for review). The notion of interesting projections is motivated by an observation that for most high-dimensional data clouds, most low-dimensional projections are approximately normal (Diaconis and Freedman, 1984). This finding suggests that the important information in the data is conveyed in those directions whose single dimensional projected distribution is far from Gaussian. Various projection indices (measures for the goodness of a projection) differ on the assumptions about the nature of deviation from normality, and in their computational efficiency. They can be considered as different priors motivated by specific assumptions on the underlying model.

To partially decouple the search for a projection vector from the search for a nonparametric ridge function, we propose to add a penalty term, which is based on a projection index, to the energy minimization associated with the estimation of the ridge functions and the projections. Specifically, let ρ(a) be a projection index which is minimized for projections with a certain deviation from normality. At the j'th iteration, we minimize the sum

    Σ_i r_{ij}²(x_i) + C(g_j) + ρ(a_j).

When a concurrent minimization over several projections/functions is practical, we get a penalty term of the form

    B(f) = Σ_j [C(g_j) + ρ(a_j)].

Since C and ρ may not be linear, the more general measure that does not assume a stepwise approach, but instead seeks l projections and ridge functions concurrently, is given by

    B(f) = C(g_1, ..., g_l) + ρ(a_1, ..., a_l).

In practice, ρ depends implicitly on the training data (the empirical density) and is therefore replaced by its empirical measure ρ̂.
3.1 Some Possible Measures
Some applicable projection indices are discussed in (Huber, 1985; Jones and Sibson, 1987; Friedman, 1987; Hall, 1989a; Intrator, 1990). Probably, all the possible measures should emphasize some form of deviation from normality, but the specific type may depend on the problem at hand. For example, a measure based on the Karhunen-Loève expansion (Mougeot et al., 1991) may be useful for image compression with autoassociative networks, since in this case one is interested in minimizing the L2 norm of the distance between the reconstructed image and the original one, and under mild conditions, the Karhunen-Loève expansion gives the optimal solution.

A different type of prior knowledge is required for classification problems. The underlying assumption then is that the data is clustered (when projecting in the right directions) and that the classification may be achieved by some (nonlinear) mapping of these clusters. In such a case, the projection index should emphasize multi-modality as a specific deviation from normality. A projection index that emphasizes multimodality in the projected distribution (without relying on the class labels) has recently been introduced (Intrator, 1990) and implemented efficiently using a variant of a biologically motivated unsupervised network (Intrator and Cooper, 1992). Its integration into a back-propagation classifier will be discussed below.
3.2 Adding EPP constraints to back-propagation network
One way of adding some prior knowledge into the architecture is by minimizing the effective number of parameters using weight sharing, in which a single weight is shared among many connections in the network (Waibel et al., 1989; Le Cun et al., 1989). An extension of this idea is the "soft weight sharing" which favors irregularities in the weight distribution in the form of multimodality (Nowlan and Hinton, 1992). This penalty improved generalization results obtained by a weight elimination penalty. Both these methods make an explicit assumption about the structure of the weight space, but with no regard to the structure of the input space.

As described in the context of projection pursuit regression, a penalty term may be added to the energy functional minimized by error back propagation, for the purpose of measuring directly the goodness of the projections sought by the network. Since our main interest is in reducing overfitting for high dimensional problems, our underlying assumption is that the surface function to be estimated can be faithfully represented using a low dimensional composition of sigmoidal functions, namely, using a back-propagation network in which the number of hidden units is much smaller than the number of input units. Therefore, the penalty term may be added only to the hidden layer. The synaptic modification equations of the hidden units' weights become

    ∂w_ij/∂t = −ε [ ∂E(w, x)/∂w_ij + ∂ρ(w_1, ..., w_n)/∂w_ij + (contribution of cost/complexity terms) ].

An approach of this type has been used in image compression, with a penalty aimed at minimizing the entropy of the projected distribution (Bichsel and Seitz, 1989). This penalty certainly measures deviation from normality, since entropy is maximized for a Gaussian distribution.
4 Projection Index for Classification: The Unsupervised BCM Neuron
Intrator (1990) has recently shown that a variant of the Bienenstock, Cooper and Munro neuron (Bienenstock et al., 1982) performs exploratory projection pursuit using a projection index that measures multi-modality. This neuron version allows theoretical analysis of some visual deprivation experiments (Intrator and Cooper, 1992), and is in agreement with the vast experimental results on visual cortical plasticity (Clothiaux et al., 1991). A network implementation which can find several projections in parallel while retaining its computational efficiency was found to be applicable for extracting features from very high dimensional vector spaces (Intrator and Gold, 1993; Intrator et al., 1991; Intrator, 1992).

The activity of neuron k in the network is c_k = Σ_i x_i w_ik + w_0k. The inhibited activity and threshold of the k'th neuron are given by

    c̃_k = σ(c_k − η Σ_{j≠k} c_j),    Θ_m^k = E[c̃_k²].

The threshold Θ_m^k is the point at which the modification function φ changes sign (see Intrator and Cooper, 1992 for further details). The function φ is given by

    φ(c̃, Θ_m) = c̃ (c̃ − Θ_m).

The risk (projection index) for a single neuron is given by

    R(w_k) = −{ (1/3) E[c̃_k³] − (1/4) E²[c̃_k²] }.

The total risk is the sum of each local risk. The negative gradient of the risk that leads to the synaptic modification equations is given by

    ∂w_ij/∂t = E[ ( φ(c̃_j, Θ_m^j) σ'(c̃_j) − η Σ_{k≠j} φ(c̃_k, Θ_m^k) σ'(c̃_k) ) x_i ].

This last equation is an additional penalty to the energy minimization of the supervised network. Note that there is an interaction between adjacent neurons in the hidden layer. In practice, the stochastic version of the differential equation can be used as the learning rule.
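A minimal sketch of this stochastic learning rule is given below. The logistic choice of σ, the learning rate, and the running-average estimate of Θ_m are our own assumptions, not specifications from the paper.

```python
import numpy as np

def sigma(x):                      # sigmoidal transfer function
    return 1.0 / (1.0 + np.exp(-x))

def sigma_prime(x):                # derivative of sigma
    s = sigma(x)
    return s * (1.0 - s)

def phi(c, theta):                 # BCM modification function phi(c, theta) = c(c - theta)
    return c * (c - theta)

def bcm_step(W, x, theta, eta=0.1, lr=0.01, tau=0.99):
    """One stochastic update of the laterally inhibited BCM network.

    W: (K, d) weights (bias folded into x), x: (d,) input, theta: (K,) thresholds.
    """
    c = W @ x                                    # feedforward activities c_k
    c_bar = sigma(c - eta * (c.sum() - c))       # inhibited activities c~_k
    term = phi(c_bar, theta) * sigma_prime(c_bar)
    # stochastic estimate of the modification, including lateral interactions
    dW = (term - eta * (term.sum() - term))[:, None] * x[None, :]
    W = W + lr * dW
    theta = tau * theta + (1 - tau) * c_bar ** 2  # running estimate of E[c~_k^2]
    return W, theta
```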
5 Applications
We have applied this hybrid classification method to various speech and image recognition problems in high dimensional space. In one speech application we used voiceless stop consonants extracted from the TIMIT database as training tokens (Intrator and Tajchman, 1991). A detailed biologically motivated speech representation was produced by Lyon's cochlear model (Lyon, 1982; Slaney, 1988). This representation produced 5040 dimensions (84 channels × 60 time slices). In addition to an initial voiceless stop, each token contained a final vowel from the set [aa, ao, er, iy]. Classification of the voiceless stop consonants using a test set that included 7 vowels [uh, ih, eh, ae, ah, uw, ow] produced an average error of 18.8%, while on the same task classification using a back-propagation network produced an average error of 20.9% (a significant difference, P < .0013). Additional experiments on vowel tokens appear in Tajchman and Intrator (1992).

Another application is in the area of face recognition from gray level pixels (Intrator et al., 1992). After aligning and normalizing the images, the input was set to 37 × 62 pixels (a total of 2294 dimensions). The recognition performance was tested on a subset of the MIT Media Lab database of face images made available by Turk and Pentland (1991), which contained 27 face images of each of 16 different persons. The images were taken under varying illumination and camera location. Of the 27 images available, 17 randomly chosen ones served for training and the remaining 10 were used for testing. Using an ensemble average of hybrid networks (Lincoln and Skrzypek, 1990; Pearlmutter and Rosenfeld, 1991; Perrone and Cooper, 1992) we obtained an error rate of 0.62% as opposed to 1.2% using a similar ensemble of back-prop networks. A single back-prop network achieves an error between 2.5% and 6% on this data. The experiments were done using 8 hidden units.
6 Summary
A penalty that allows the incorporation of additional prior information on the underlying model was presented. This prior was introduced in the context of projection pursuit regression, classification, and in the context of back-propagation networks. It achieves partial decoupling of the estimation of the ridge functions (in PPR) or the regression function in a back-propagation net from the estimation of the projections. Thus it is potentially useful in reducing problems associated with overfitting, which are more pronounced in high dimensional data.

Some possible projection indices were discussed and a specific projection index that is particularly useful for classification was presented in this context. This measure, which emphasizes multi-modality in the projected distribution, was found useful in several very high dimensional problems.
6.1 Acknowledgements
I wish to thank Leon Cooper, Stu Geman and Michael Perrone for many fruitful conversations and the referee for helpful comments. The speech experiments were performed using the computational facilities of the Cognitive Science Department at Brown University. Research was supported by the National Science Foundation, the Army Research Office, and the Office of Naval Research.
References
Bichsel, M. and Seitz, P. (1989). Minimum class entropy: A maximum information approach to layered networks. Neural Networks, 2:133–141.
Bienenstock, E. L., Cooper, L. N., and Munro, P. W. (1982). Theory for the development of neuron selectivity: orientation specificity and binocular interaction in visual cortex. Journal of Neuroscience, 2:32–48.
Clothiaux, E. E., Cooper, L. N., and Bear, M. F. (1991). Synaptic plasticity in visual cortex: Comparison of theory with experiment. Journal of Neurophysiology, 66:1785–1804.
Diaconis, P. and Freedman, D. (1984). Asymptotics of graphical projection pursuit. Annals of Statistics, 12:793–815.
Friedman, J. H. (1987). Exploratory projection pursuit. Journal of the American Statistical Association, 82:249–266.
Friedman, J. H. and Stuetzle, W. (1981). Projection pursuit regression. Journal of the American Statistical Association, 76:817–823.
Friedman, J. H. and Tukey, J. W. (1974). A projection pursuit algorithm for exploratory data analysis. IEEE Transactions on Computers, C(23):881–889.
Hall, P. (1988). Estimating the direction in which a data set is most interesting. Probab. Theory Rel. Fields, 80:51–78.
Hall, P. (1989a). On polynomial-based projection indices for exploratory projection pursuit. The Annals of Statistics, 17:589–605.
Hall, P. (1989b). On projection pursuit regression. The Annals of Statistics, 17:573–588.
Hornik, K. (1991). Approximation capabilities of multilayer feedforward networks. Neural Networks, 4:251–257.
Huber, P. J. (1985). Projection pursuit (with discussion). The Annals of Statistics, 13:435–475.
Intrator, N. (1990). Feature extraction using an unsupervised neural network. In Touretzky, D. S., Elman, J. L., Sejnowski, T. J., and Hinton, G. E., editors, Proceedings of the 1990 Connectionist Models Summer School, pages 310–318. Morgan Kaufmann, San Mateo, CA.
Intrator, N. (1992). Feature extraction using an unsupervised neural network. Neural Computation, 4:98–107.
Intrator, N. (1993). Combining exploratory projection pursuit and projection pursuit regression with application to neural networks. Neural Computation. In press.
Intrator, N. and Cooper, L. N. (1992). Objective function formulation of the BCM theory of visual cortical plasticity: Statistical connections, stability conditions. Neural Networks, 5:3–17.
Intrator, N. and Gold, J. I. (1993). Three-dimensional object recognition of gray level images: The usefulness of distinguishing features. Neural Computation. In press.
Intrator, N., Gold, J. I., Bülthoff, H. H., and Edelman, S. (1991). Three-dimensional object recognition using an unsupervised neural network: Understanding the distinguishing features. In Feldman, Y. and Bruckstein, A., editors, Proceedings of the 8th Israeli Conference on AICV, pages 113–123. Elsevier.
Intrator, N., Reisfeld, D., and Yeshurun, Y. (1992). Face recognition using a hybrid supervised/unsupervised neural network. Preprint.
Intrator, N. and Tajchman, G. (1991). Supervised and unsupervised feature extraction from a cochlear model for speech recognition. In Juang, B. H., Kung, S. Y., and Kamm, C. A., editors, Neural Networks for Signal Processing, Proceedings of the 1991 IEEE Workshop, pages 460–469. IEEE Press, New York, NY.
Jones, L. (1987). On a conjecture of Huber concerning the convergence of projection pursuit regression. Annals of Statistics, 15:880–882.
Jones, M. C. and Sibson, R. (1987). What is projection pursuit? (with discussion). J. Roy. Statist. Soc., Ser. A(150):1–36.
Kruskal, J. B. (1969). Toward a practical method which helps uncover the structure of the set of multivariate observations by finding the linear transformation which optimizes a new 'index of condensation'. In Milton, R. C. and Nelder, J. A., editors, Statistical Computation, pages 427–440. Academic Press, New York.
Kruskal, J. B. (1972). Linear transformation of multivariate data to reveal clustering. In Shepard, R. N., Romney, A. K., and Nerlove, S. B., editors, Multidimensional Scaling: Theory and Application in the Behavioral Sciences, I, Theory, pages 179–191. Seminar Press, New York and London.
Le Cun, Y., Boser, B., Denker, J., Henderson, D., Howard, R., Hubbard, W., and Jackel, L. (1989). Backpropagation applied to handwritten zip code recognition. Neural Computation, 1:541–551.
Lincoln, W. P. and Skrzypek, J. (1990). Synergy of clustering multiple back-propagation networks. In Touretzky, D. S. and Lippmann, R. P., editors, Advances in Neural Information Processing Systems, volume 2, pages 650–657. Morgan Kaufmann, San Mateo, CA.
Lyon, R. F. (1982). A computational model of filtering, detection, and compression in the cochlea. In Proceedings IEEE International Conference on Acoustics, Speech, and Signal Processing. Paris, France.
Mougeot, M., Azencott, R., and Angeniol, B. (1991). Image compression with back propagation: Improvement of the visual restoration using different cost functions. Neural Networks, 4:467–476.
Nowlan, S. J. and Hinton, G. E. (1992). Simplifying neural networks by soft weight-sharing. Neural Computation. In press.
Pearlmutter, B. A. and Rosenfeld, R. (1991). Chaitin-Kolmogorov complexity and generalization in neural networks. In Lippmann, R. P., Moody, J. E., and Touretzky, D. S., editors, Advances in Neural Information Processing Systems, volume 3, pages 925–931. Morgan Kaufmann, San Mateo, CA.
Perrone, M. P. and Cooper, L. N. (1992). When networks disagree: Generalized ensemble method for neural networks. In Mammone, R. J. and Zeevi, Y., editors, Neural Networks: Theory and Applications, volume 2. Academic Press.
Slaney, M. (1988). Lyon's cochlear model. Technical report, Apple Corporate Library, Cupertino, CA 95014.
Switzer, P. (1970). Numerical classification. In Barnett, V., editor, Geostatistics. Plenum Press, New York.
Tajchman, G. N. and Intrator, N. (1992). Phonetic classification of TIMIT segments preprocessed with Lyon's cochlear model using a supervised/unsupervised hybrid neural network. In Proceedings International Conference on Spoken Language Processing, Banff, Alberta, Canada.
Turk, M. and Pentland, A. (1991). Eigenfaces for recognition. Journal of Cognitive Neuroscience, 3:71–86.
Waibel, A., Hanazawa, T., Hinton, G., Shikano, K., and Lang, K. (1989). Phoneme recognition using time-delay neural networks. IEEE Transactions on ASSP, 37:328–339.
6,036 | 6,460 | Dynamic matrix recovery from incomplete
observations under an exact low-rank constraint
Liangbei Xu
Mark A. Davenport
Department of Electrical and Computer Engineering
Georgia Institute of Technology
Atlanta, GA 30318
lxu66@gatech.edu mdav@gatech.edu
Abstract
Low-rank matrix factorizations arise in a wide variety of applications ? including
recommendation systems, topic models, and source separation, to name just a few.
In these and many other applications, it has been widely noted that by incorporating temporal information and allowing for the possibility of time-varying models,
significant improvements are possible in practice. However, despite the reported
superior empirical performance of these dynamic models over their static counterparts, there is limited theoretical justification for introducing these more complex
models. In this paper we aim to address this gap by studying the problem of recovering a dynamically evolving low-rank matrix from incomplete observations. First,
we propose the locally weighted matrix smoothing (LOWEMS) framework as one
possible approach to dynamic matrix recovery. We then establish error bounds for
LOWEMS in both the matrix sensing and matrix completion observation models.
Our results quantify the potential benefits of exploiting dynamic constraints both
in terms of recovery accuracy and sample complexity. To illustrate these benefits
we provide both synthetic and real-world experimental results.
1 Introduction
Suppose that X ∈ R^{n1×n2} is a rank-r matrix with r much smaller than n1 and n2. We observe X through a linear operator A : R^{n1×n2} → R^m,

    y = A(X),    y ∈ R^m.
In recent years there has been a significant amount of progress in our understanding of how to recover
X from observations of this form even when the number of observations m is much less than the
number of entries in X. (See [8] for an overview of this literature.) When A is a set of weighted linear
combinations of the entries of X, this problem is often referred to as the matrix sensing problem.
In the special case where A samples a subset of entries of X, it is known as the matrix completion
problem. There are a number of ways to establish recovery guarantee in these settings. Perhaps the
most popular approach for theoretical analysis in recent years has focused on the use of nuclear norm
minimization as a convex surrogate for the (nonconvex) rank constraint [1, 3, 4, 5, 6, 7, 15, 19, 21, 22].
An alternative, however, is to aim to directly solve the problem under an exact low-rank constraint.
This leads a non-convex optimization problem, but has several computational advantages over most
approaches to minimizing the nuclear norm and is widely used in large-scale applications (such
as recommendation systems) [16]. In general, popular algorithms for solving the rank-constrained
models ? e.g., alternating minimization and alternating gradient descent ? do not have as strong of
convergence or recovery error guarantees due to the non-convexity of the rank constraint. However,
there has been significant progress on this front in recent years [11, 10, 12, 13, 14, 23, 25], with many
of these algorithms now having guarantees comparable to those for nuclear norm minimization.
30th Conference on Neural Information Processing Systems (NIPS 2016), Barcelona, Spain.
Nearly all of this existing work assumes that the underlying low-rank matrix X remains fixed
throughout the measurement process. In many practical applications, this is a tremendous limitation.
For example, users? preferences for various items may change (sometimes quite dramatically) over
time. Modelling such drift of user?s preference has been proposed in the context of both music and
movies as a way to achieve higher accuracy in recommendation systems [9, 17]. Another example
in signal processing is dynamic non-negative matrix factorization for the blind signal separation
problem [18]. In these and many other applications, explicitly modelling the dynamic structure in the
data has led to superior empirical performance. However, our theoretical understanding of dynamic
low-rank matrix recovery is still very limited.
In this paper we provide the first theoretical results on the dynamic low-rank matrix recovery problem.
We determine the sense in which dynamic constraints can help to recover the underlying time-varying
low-rank matrix in a particular dynamic model and quantify this impact through recovery error
bounds. To describe our approach, we consider a simple example where we have two rank-r matrices
X 1 and X 2 . Suppose that we have a set of observations for each of X 1 and X 2 , given by
    y^i = A^i(X^i),  i = 1, 2.

The naïve approach is to use y^1 to recover X^1 and y^2 to recover X^2 separately. In this case the number of observations required to guarantee successful recovery is roughly m_i ≥ C^i r max(n1, n2) for i = 1, 2 respectively, where C^1, C^2 are fixed positive constants (see [4]). However, if we know
that X 2 is close to X 1 in some sense (for example, if X 2 is a small perturbation of X 1 ), then the
above approach is suboptimal both in terms of recovery accuracy and sample complexity, since in
this setting y 1 actually contains information about X 2 (and similarly, y 2 contains information about
X 1 ). There are a variety of possible approaches to incorporating this additional information. The
approach we will take is inspired by the LOWESS (locally weighted scatterplot smoothing) approach
from non-parametric regression. In the case of this simple example, if we look just at the problem of
estimating X 2 , our approach reduces to solving a problem of the form
    min_{X^2}  (1/2) ‖A^2(X^2) − y^2‖_2² + (λ/2) ‖A^1(X^2) − y^1‖_2²    s.t.  rank(X^2) ≤ r,

where λ is a parameter that determines how strictly we are enforcing the dynamic constraint (if X^1 is very close to X^2 we can set λ to be larger, but if X^1 is far from X^2 we will set it to be
comparatively small). This approach generalizes naturally to the locally weighted matrix smoothing
(LOWEMS) program described in Section 2. Note that it has a (simple) convex objective function, but
a non-convex rank constraint. Our analysis in Section 3 shows that the proposed program outperforms
the above naïve recovery strategy both in terms of recovery accuracy and sample complexity.
We should emphasize that the proposed LOWEMS program is non-convex due to the exact low-rank constraint. Inspired by previous work on matrix factorization, we propose using an efficient
alternating minimization algorithm (described in more detail in Section 4). We explicitly enforce the
low-rank constraint by optimizing over a rank-r factorization and alternately minimize with respect
to one of the factors while holding the other one fixed. This approach is popular in practice since
it is typically less computationally complex than nuclear norm minimization based algorithms. In
addition, thanks to recent work on global convergence guarantees for alternating minimization for
low-rank matrix recovery [10, 13, 25], one can reasonably expect similar convergence guarantees to
hold for alternating minimization in the context of LOWEMS, although we leave the pursuit of such
guarantees for future work.
To empirically verify our analysis, we perform both synthetic and real world experiments, described
in Section 5. The synthetic experimental results demonstrate that LOWEMS outperforms the naïve
approach in practice both in terms of recovery accuracy and sample complexity. We also demonstrate
the effectiveness of LOWEMS in the context of recommendation systems.
Before proceeding, we briefly state some of the notation that we will use throughout. For a vector
x ∈ R^n, we let ‖x‖_p denote the standard ℓ_p norm. Given a matrix X ∈ R^{n1×n2}, we use X_{i:} to denote the ith row of X and X_{:j} to denote the jth column of X. We let ‖X‖_F denote the Frobenius norm, ‖X‖_2 the operator norm, ‖X‖_* the nuclear norm, and ‖X‖_∞ = max_{i,j} |X_{ij}| the elementwise infinity norm. Given a pair of matrices X, Y ∈ R^{n1×n2}, we let ⟨X, Y⟩ = Σ_{i,j} X_{ij} Y_{ij} = Tr(Y^T X) denote the standard inner product. Finally, we let n_max and n_min denote max{n1, n2} and min{n1, n2} respectively.
2 Problem formulation
The underlying assumption throughout this paper is that our low-rank matrix is changing over time
during the measurement process. For simplicity we will model this through the following discrete
dynamic process: at time t, we have a low-rank matrix X^t ∈ R^{n1×n2} with rank r, which we assume is related to the matrix at previous time-steps via

    X^t = f(X^1, ..., X^{t−1}) + ε^t,

where ε^t represents noise. Then we observe each X^t through a linear operator A^t : R^{n1×n2} → R^{m_t},

    y^t = A^t(X^t) + z^t,    y^t, z^t ∈ R^{m_t},    (1)

where z^t is measurement noise. In our problem we will suppose that we observe up to d time steps, and our goal is to recover {X^t}_{t=1}^d jointly from {y^t}_{t=1}^d.
The above model is sufficiently flexible to incorporate a wide variety of dynamics, but we will
make several simplifications. First, we note that we can impose the low-rank constraint explicitly by factorizing X^t as X^t = U^t (V^t)^T, U^t ∈ R^{n1×r}, V^t ∈ R^{n2×r}. In general both U^t and V^t may be changing over time. However, in some applications, it is reasonable to assume that only one set of factors is changing. For example, in a recommendation system where our matrix represents user preferences, if the rows correspond to items and the columns correspond to users, then U^t contains the latent properties of the items and V^t models the latent preferences of the users. In this context it is reasonable to assume that only V^t changes over time [9, 17], and that there is a fixed matrix U (which we may assume to be orthonormal) such that we can write X^t = U(V^t)^T for all t. Similar
arguments can be made in a variety of other applications, including personalized learning systems,
blind signal separation, and more.
Second, we assume a Markov property on f, so that X^t (or equivalently, V^t) only depends on the previous X^{t−1} (or V^{t−1}). Furthermore, although other dynamic models could be accommodated, for the sake of simplicity in our analysis we consider the simple model on V^t where

    V^t = V^{t−1} + ε^t,    t = 2, ..., d.    (2)

We will also assume that both ε^t and the measurement noise z^t are i.i.d. zero-mean Gaussian noise.
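A short sketch (ours, with assumed names) that samples a matrix sequence from this model under the fixed orthonormal factor assumption:

```python
import numpy as np

def simulate_dynamics(n1, n2, r, d, sigma2, rng=None):
    """Sample X^1, ..., X^d with X^t = U (V^t)^T and V^t = V^{t-1} + eps^t."""
    rng = np.random.default_rng(rng)
    U, _ = np.linalg.qr(rng.standard_normal((n1, r)))   # fixed orthonormal factor
    V = rng.standard_normal((n2, r))
    Xs = [U @ V.T]
    for _ in range(d - 1):
        V = V + sigma2 * rng.standard_normal((n2, r))   # perturbation model (2)
        Xs.append(U @ V.T)
    return Xs
```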
To simplify our discussion, we will assume that our goal is to recover the matrix at the most recent time-step, i.e., we wish to estimate X^d from {y^t}_{t=1}^d. Our general approach can be stated as follows. The LOWEMS estimator is given by the following optimization program:

    X̂^d = arg min_{X ∈ C(r)} L(X) = arg min_{X ∈ C(r)} (1/2) Σ_{t=1}^d w_t ‖A^t(X) − y^t‖_2²,    (3)

where C(r) = {X ∈ R^{n1×n2} : rank(X) ≤ r}, and {w_t}_{t=1}^d are non-negative weights. We further assume Σ_{t=1}^d w_t = 1 to avoid ambiguity. In the following section we provide bounds on the performance of the LOWEMS estimator for two common choices of operators A^t.
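As a sketch of how (3) can be attacked with the alternating minimization strategy discussed in Section 4, the code below alternately solves the two linear least squares problems in U and V under the factorization X = U V^T. It is a naive dense implementation for illustration only, and the names and structure are our own assumptions, not the paper's algorithm.

```python
import numpy as np

def lowems_altmin(Ms, ys, w, n1, n2, r, iters=50, rng=None):
    """Alternating least squares sketch for the LOWEMS objective
    (1/2) sum_t w_t ||A^t(U V^T) - y^t||^2 over U (n1 x r), V (n2 x r).

    Ms[t]: (m0, n1*n2) array whose i-th row is the row-major vectorization
    of the i-th sensing matrix of A^t; ys[t]: the corresponding observations.
    """
    rng = np.random.default_rng(rng)
    # stack the weighted measurements once
    M = np.vstack([np.sqrt(wt) * Mt for wt, Mt in zip(w, Ms)])
    y = np.concatenate([np.sqrt(wt) * yt for wt, yt in zip(w, ys)])
    V = rng.standard_normal((n2, r))
    for _ in range(iters):
        # solve for U with V fixed: vec(U V^T) = kron(I_{n1}, V) vec(U)
        P = np.kron(np.eye(n1), V)
        U = np.linalg.lstsq(M @ P, y, rcond=None)[0].reshape(n1, r)
        # solve for V with U fixed: vec(U V^T) = kron(U, I_{n2}) vec(V^T)
        Q = np.kron(U, np.eye(n2))
        V = np.linalg.lstsq(M @ Q, y, rcond=None)[0].reshape(r, n2).T
    return U @ V.T
```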
3 Recovery error bounds
Given the estimator X̂^d from (3), we define the recovery error to be Δ^d := X̂^d − X^d. Our goal in this section will be to provide bounds on ‖X̂^d − X^d‖_F under two common observation models. Our analysis builds on the following (deterministic) inequality.
Proposition 3.1. The estimator X̂^d given by either (3) or (9) satisfies

    Σ_{t=1}^d w_t ‖A^t(Δ^d)‖_2² ≤ 2√(2r) ‖ Σ_{t=1}^d w_t A^{t*}(h^t − z^t) ‖_2 ‖Δ^d‖_F,    (4)

where h^t = A^t(X^d − X^t) and A^{t*} is the adjoint operator of A^t.
This is a deterministic result that holds for any set of {A^t}. The remaining work is to lower bound the LHS of (4), and upper bound the RHS of (4) for concrete choices of {A^t}. In the following sections we derive such bounds in the settings of both Gaussian matrix sensing and matrix completion. For simplicity and without loss of generality, we will assume m_1 = ... = m_d =: m_0, so that the total number of observations is simply m = d m_0.
3.1 Matrix sensing setting
For the matrix sensing problem, we will consider the case where all operators At correspond to
Gaussian measurement ensembles, defined as follows.
Definition 3.2. [4] A linear operator A : R^{n1×n2} → R^m is a Gaussian measurement ensemble if we can express each entry of A(X) as [A(X)]_i = ⟨A_i, X⟩ for a matrix A_i whose entries are i.i.d. according to N(0, 1/m), and where the matrices A_1, ..., A_m are independent from each other.
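For reference, a Gaussian measurement ensemble and its adjoint (which appears in Proposition 3.1) can be instantiated as follows; this sketch and its names are ours.

```python
import numpy as np

def gaussian_ensemble(m, n1, n2, rng=None):
    """Return (A, A_adj) implementing y = A(X) and its adjoint A*(y).

    Stacks the sensing matrices A_i as rows of an m x (n1*n2) array with
    i.i.d. N(0, 1/m) entries, so that [A(X)]_i = <A_i, X>.
    """
    rng = np.random.default_rng(rng)
    M = rng.standard_normal((m, n1 * n2)) / np.sqrt(m)
    A = lambda X: M @ X.ravel()
    A_adj = lambda y: (M.T @ y).reshape(n1, n2)   # A*(y) = sum_i y_i A_i
    return A, A_adj
```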
Also, we define the matrix restricted isometry property (RIP) for a linear map A.
Definition 3.3. [4] For each integer r = 1, ..., n_min, the isometry constant δ_r of A is the smallest quantity such that

    (1 − δ_r) ‖X‖_F² ≤ ‖A(X)‖_2² ≤ (1 + δ_r) ‖X‖_F²

holds for all matrices X of rank at most r.
An important result (that we use in the proof of Theorem 3.4) is that Gaussian measurement ensembles satisfy the matrix RIP with high probability provided m ≥ C r n_max. See, for example, [4] for details.

To obtain an error bound in the matrix sensing case we lower bound the LHS of (4) using the matrix RIP and upper bound the stochastic error (the RHS of (4)) using a covering argument. The following is our main result in the context of matrix sensing.
Theorem 3.4. Suppose that we are given measurements as in (1) where all A^t's are Gaussian measurement ensembles. Assume that X^t evolves according to (2) and has rank r. Further assume that the measurement noise z^t is i.i.d. N(0, σ_1²) for 1 ≤ t ≤ d and that the perturbation noise ε^t is i.i.d. N(0, σ_2²) for 2 ≤ t ≤ d. If

    m_0 ≥ D_1 max{ n_max r Σ_{t=1}^d w_t², n_max },    (5)

where D_1 is a fixed positive constant, then the estimator X̂^d from (3) satisfies

    ‖Δ^d‖_F² ≤ C_0 ( Σ_{t=1}^d w_t² σ_1² + Σ_{t=1}^{d−1} (d − t) w_t² σ_2² ) (n_max r / m_0)    (6)

with probability at least P_1 = 1 − d C_1 exp(−c_1 n_2), where C_0, C_1, c_1 are positive constants.
If we choose the weights as w_d = 1 and w_t = 0 for 1 ≤ t ≤ d − 1, the bound in Theorem 3.4
reduces to a bound matching classical (static) matrix recovery results (see, for example, [4] Theorem
2.4). Also note that in this case Theorem 3.4 implies exact recovery when the sample complexity
is O(rn/d). In order to help interpret this result for other choices of the weights, we note that for a
given set of parameters, we can determine the optimal weights that will minimize this bound. Towards
this end, we define ρ := σ_2²/σ_1² and set p_t = (d − t), 1 ≤ t ≤ d. Then one can calculate the optimal
weights by solving the following quadratic program:
    {w_t*}_{t=1}^d = arg min_{Σ_t w_t = 1; w_t ≥ 0}  Σ_{t=1}^d w_t² + Σ_{t=1}^{d−1} p_t ρ w_t².    (7)
Using the method of Lagrange multipliers one can show that (7) has the analytical solution:
    w_j* = (1/(1 + p_j ρ)) / ( Σ_{i=1}^d 1/(1 + p_i ρ) ),    1 ≤ j ≤ d.    (8)
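The closed form (8) is straightforward to evaluate; a small sketch (ours) with ρ = σ_2²/σ_1²:

```python
import numpy as np

def optimal_weights(d, rho):
    """Closed-form minimizer (8) of the quadratic program (7)."""
    p = np.arange(d, 0, -1) - 1          # p_t = d - t for t = 1, ..., d
    w = 1.0 / (1.0 + p * rho)
    return w / w.sum()

# sanity checks: rho = 0 gives uniform weights; large rho concentrates on t = d
assert np.allclose(optimal_weights(5, 0.0), np.full(5, 0.2))
print(optimal_weights(5, 10.0))   # -> weights increasing toward the last time step
```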
A simple special case occurs when σ_2 = 0. In this case all V^t's are the same, and the optimal weights go to w_t = 1/d for all t. In contrast, when σ_2 grows large the weights eventually converge to w_d = 1 and w_t = 0 for all t ≠ d. This results in essentially using only y^d to recover X^d and ignoring the rest of the measurements. Combining these, we note that when σ_2 is small, we can gain by a factor of approximately d over the naïve strategy that ignores dynamics and tries to recover X^d using only y^d. Notice also that the minimum sample complexity is proportional to Σ_{t=1}^d w_t² when r/d is relatively large. Thus, when σ_2 is small, the required number of measurements can be reduced by a factor of d compared to what would be required to recover X^d using only y^d.
3.2 Matrix completion setting
For the matrix completion problem, we consider the following simple uniform sampling ensemble:
Definition 3.5. A linear operator A : R^{n_1×n_2} → R^m is a uniform sampling ensemble (with replacement) if all sensing matrices A_i are i.i.d. uniformly distributed on the set

X = { e_j(n_1) e_k(n_2)^T : 1 ≤ j ≤ n_1, 1 ≤ k ≤ n_2 },

where e_j(n) are the canonical basis vectors in R^n. We let p = m_0/(n_1 n_2) denote the fraction of sampled entries.
For this observation architecture, our analysis is complicated by the fact that it does not satisfy the
matrix RIP. (A quick problematic example is a rank-1 matrix with only one non-zero entry.) To handle
this we follow the typical approach and restrict our focus to matrices that satisfy certain incoherence
properties.
Definition 3.6. (Subspace incoherence [10]) Let U ∈ R^{n×r} be an orthonormal basis for an r-dimensional subspace U; then the incoherence of U is defined as μ(U) := max_{i∈[n]} √(n/r) ‖e_i^T U‖_2, where e_i denotes the ith standard basis vector. We also simply denote μ(span(U)) as μ(U).
Definition 3.7. (Matrix incoherence [13]) A rank-r matrix X ∈ R^{n_1×n_2} with SVD X = UΣV^T is incoherent with parameter μ if

‖U_{i,:}‖_2 ≤ μ √(r/n_1) for any i ∈ [n_1]  and  ‖V_{j,:}‖_2 ≤ μ √(r/n_2) for any j ∈ [n_2],

i.e., the subspaces spanned by the columns of U and V are both μ-incoherent.
The incoherence assumption guarantees that X is far from sparse, which makes it possible to recover X from incomplete measurements, since each measurement then contains roughly the same amount of information about all dimensions.
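Both incoherence quantities are cheap to evaluate directly from Definitions 3.6 and 3.7; the snippet below (our own helper names) does so via an SVD:

```python
import numpy as np

def subspace_incoherence(U):
    """mu(U) = max_i sqrt(n/r) * ||e_i^T U||_2 for an orthonormal n x r basis U."""
    n, r = U.shape
    return np.sqrt(n / r) * np.linalg.norm(U, axis=1).max()

def matrix_incoherence(X, r):
    """Smallest mu for which a rank-r X satisfies Definition 3.7."""
    U, _, Vt = np.linalg.svd(X, full_matrices=False)
    return max(subspace_incoherence(U[:, :r]), subspace_incoherence(Vt[:r].T))
```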
To proceed we also assume that the matrix X^d has "bounded spikiness" in that the maximum entry of X^d is bounded by a, i.e., ‖X^d‖_∞ ≤ a. To exploit the spikiness constraint below we replace the optimization constraint C(r) in (3) with C(r, a) := {X ∈ R^{n_1×n_2} : rank(X) ≤ r, ‖X‖_∞ ≤ a}:

X̂^d = arg min_{X ∈ C(r,a)} L(X) = arg min_{X ∈ C(r,a)} (1/2) Σ_{t=1}^d w_t ‖A_t(X) − y^t‖_2².   (9)
Note that Proposition 3.1 still holds for (9).
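If one wanted to enforce C(r, a) inside an iterative solver, a common heuristic is to alternate a truncated SVD with entrywise clipping. The sketch below illustrates that heuristic; note it is only an approximate projection onto the intersection, not the exact estimator (9):

```python
import numpy as np

def approx_project_Cra(X, r, a):
    """Heuristic 'projection' onto C(r, a): truncated SVD to rank r,
    then entrywise clipping to [-a, a]. Clipping can break exact rank r,
    so this is an approximation, not an exact projection."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    X_r = (U[:, :r] * s[:r]) @ Vt[:r]
    return np.clip(X_r, -a, a)
```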
To obtain an error bound in the matrix completion case, we lower bound the LHS of (4) using a restricted convexity argument (see, for example, [20]) and upper bound the RHS using the matrix Bernstein inequality. The result of this approach is the following theorem.
Theorem 3.8. Suppose that we are given measurements as in (1) where all A_t's are uniform sampling ensembles. Assume that X^t evolves according to (2), has rank r, and is incoherent with parameter μ_0 and ‖X^d‖_∞ ≤ a. Further assume that the perturbation noise and the measurement noise satisfy the same assumptions as in Theorem 3.4. If

m_0 ≥ D_2 n_min log²(n_1 + n_2) β_0(w),   (10)

where β_0(w) = max_t w_t² ( (d − t) μ_0² r σ_2² / n_1 + σ_1² ) / ( Σ_{t=1}^d w_t² ( (d − t) σ_2² + σ_1² ) ),

then the estimator X̂^d from (9) satisfies

‖X̂^d − X^d‖_F² ≤ max{ B_1, B_2 }   (11)

with probability at least P_1 = 1 − 5/(n_1 + n_2) − 5 d n_max exp(−n_min), where

B_1 := C_2 a² n_1 n_2 √( Σ_{t=1}^d w_t² log(n_1 + n_2) / m_0 ),

B_2 := ( C_3 r n_1² n_2² log(n_1 + n_2) / (n_min m_0) ) ( Σ_{t=1}^d w_t² σ_1² + Σ_{t=1}^{d−1} (d − t) w_t² σ_2² + Σ_{t=1}^d w_t² a² ),   (12)

and C_2, C_3, D_2 are absolute positive constants.
If we choose the weights as w_d = 1 and w_t = 0 for 1 ≤ t ≤ d − 1, the bound in Theorem 3.8 reduces to a result comparable to classical (static) matrix completion results (see, for example, [15] Theorem 7). Moreover, from the B_2 term in (11), we obtain the same dependence on m as that of (6), i.e., 1/m. However, there are also a few key differences between Theorem 3.4 and our results for matrix completion. In general the bound is loose in several aspects compared to the matrix sensing bound. For example, when m_0 is small, B_1 actually dominates, in which case the dependence on m is actually 1/√m instead of 1/m. When m_0 is sufficiently large, then B_2 dominates, in which case we can consider two cases. The first case corresponds to when a is relatively large compared to σ_1, σ_2, i.e., the low-rank matrix is spiky. In this case the term containing a² in B_2 dominates, and the optimal weights are equal weights of 1/d. This occurs because the term involving a dominates and there is little improvement to be obtained by exploiting temporal dynamics. In the second case, when a is relatively small compared to σ_1, σ_2 (which is usually the case in practice), the bound can be simplified to
‖Δ‖_F² ≲ ( c_3 r n_1² n_2² log(n_1 + n_2) / (n_min m_0) ) ( Σ_{t=1}^d w_t² σ_1² + Σ_{t=1}^{d−1} (d − t) w_t² σ_2² ).
The above bound is much more similar to the bound in (6) from Theorem 3.4. In fact, we can also
obtain the optimal weights by solving the same quadratic program as (7).
When n_1 ≈ n_2, the sample complexity is Ω(n_min log²(n_1 + n_2) β_0(w)). In this case Theorem 3.8 also implies a similar sample complexity reduction as we observed in the matrix sensing setting. However, the precise relations between the sample complexity and the weights w_t are different in these two cases (deriving from the fact that the proof uses matrix Bernstein inequalities in the matrix completion setting rather than concentration inequalities of chi-squared variables as in the matrix sensing setting).
4 An algorithm based on alternating minimization
As noted in Section 2, any rank-r matrix can be factorized as X = UV^T where U is n_1 × r and V is n_2 × r; therefore the LOWEMS estimator in (3) can be reformulated as

X̂^d = arg min_{X ∈ C(r)} L(X) = arg min_{X = UV^T} (1/2) Σ_{t=1}^d w_t ‖A_t(UV^T) − y^t‖_2².   (13)
The above program can be solved by alternating minimization (see [17]), which alternately minimizes the objective function over U (or V) while holding V (or U) fixed until a stopping criterion is reached. Since the objective function is quadratic, each step in this procedure reduces to conventional weighted least squares, which can be solved via efficient numerical procedures. Theoretical guarantees for global convergence of alternating minimization for the static matrix sensing/completion problem have recently been established in [10, 13, 25] by treating the alternating minimization as a noisy version of the power method. Extending these results to establish convergence guarantees for (13) would involve analyzing a weighted power method. We leave this analysis for future work, but expect that similar convergence guarantees should be possible in this setting.
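For the entrywise-sampled (completion) case, each weighted least squares step has a closed form per row of U and per row of V. Below is a minimal sketch of this scheme under our own notation (masks M_t mark observed entries; the Frobenius penalty mirrors the regularization used in Section 5.2); it is an illustration, not the authors' implementation:

```python
import numpy as np

def lowems_als(Y, M, w, r, n_iter=50, reg=1.0, seed=0):
    """Alternating weighted least squares for
    min_{U,V} 0.5 * sum_t w_t * || M_t o (U V^T - Y_t) ||_F^2 + reg * (||U||_F^2 + ||V||_F^2).
    Y, M: lists of d observation / 0-1 mask matrices of shape (n1, n2)."""
    rng = np.random.default_rng(seed)
    n1, n2 = Y[0].shape
    U = rng.standard_normal((n1, r))
    V = rng.standard_normal((n2, r))
    for _ in range(n_iter):
        for i in range(n1):                      # closed-form update of row i of U
            A, b = reg * np.eye(r), np.zeros(r)
            for Yt, Mt, wt in zip(Y, M, w):
                Vi = V[Mt[i] > 0]                # factors of entries observed in row i at time t
                A += wt * Vi.T @ Vi
                b += wt * Vi.T @ Yt[i, Mt[i] > 0]
            U[i] = np.linalg.solve(A, b)
        for j in range(n2):                      # symmetric update of row j of V
            A, b = reg * np.eye(r), np.zeros(r)
            for Yt, Mt, wt in zip(Y, M, w):
                Uj = U[Mt[:, j] > 0]
                A += wt * Uj.T @ Uj
                b += wt * Uj.T @ Yt[Mt[:, j] > 0, j]
            V[j] = np.linalg.solve(A, b)
    return U, V
```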
5 Simulations and experiments
5.1 Synthetic simulations
Our synthetic simulations consider both matrix sensing and matrix completion, but with an emphasis on matrix completion. We set n_1 = 100, n_2 = 50, d = 4 and r = 5. We consider two baselines: baseline one uses only y^d to recover X^d and simply ignores y^1, . . . , y^{d−1}; baseline two uses {y^t}_{t=1}^d with equal weights. Note that both of these can be viewed as special cases of LOWEMS with weights (0, . . . , 0, 1) and (1/d, 1/d, . . . , 1/d) respectively. Recalling the formula for the optimal choice of weights in (8), it is easy to show that baseline one is equivalent to the case where ρ = σ_2²/σ_1² → ∞ and baseline two is equivalent to the case where ρ → 0. This also makes intuitive sense since ρ → ∞ means the perturbation is arbitrarily large between time steps, while ρ → 0 reduces to the static setting.
[Figure 1: Recovery error under different levels of perturbation noise σ_2. (a) matrix sensing. (b) matrix completion. Legend: Baseline one, Baseline two, LOWEMS; y-axis: Recovery Error.]
[Figure 2: Sample complexity under different levels of perturbation noise σ_2 (matrix completion). Legend: Baseline one, LOWEMS, Baseline two; y-axis: Sample Complexity p.]
1) Recovery error. In this simulation, we set m_0 = 4000 and set the measurement noise level σ_1 to 0.05. We vary the perturbation noise level σ_2. For every pair (σ_1, σ_2) we perform 10 trials, and show the average relative recovery error ‖X̂^d − X^d‖_F² / ‖X^d‖_F². Figure 1 illustrates how LOWEMS reduces the recovery error compared to our baselines. As one can see, when σ_2 is small, the optimal ρ, i.e., σ_2²/σ_1², generates nearly equal weights (baseline two), reducing the recovery error approximately by a factor of 4 over baseline one, which is roughly equal to d as expected. As σ_2 grows, the recovery error of baseline two increases dramatically due to the perturbation noise. However, in this case the optimal ρ of LOWEMS grows with it, leading to a more uneven weighting and to somewhat diminished performance gains. We also note that, as expected, LOWEMS converges to baseline one when σ_2 is large.
2) Sample complexity. In the interest of conciseness we only provide results here for the matrix completion setting (matrix sensing yields broadly similar results). In this simulation we vary the fraction of observed entries p to empirically find the minimum sample complexity required to guarantee successful recovery (defined as a relative error ≤ 0.08). We compare the sample complexity of the proposed LOWEMS to baseline one and baseline two under different perturbation noise levels σ_2 (σ_1 is set to 0.02). For each σ_2, the relative recovery error is averaged over 10 trials. Figure 2 illustrates how LOWEMS reduces the sample complexity required to guarantee successful recovery. When the perturbation noise is weaker than the measurement noise, the sample complexity can be reduced approximately by a factor of d compared to baseline one. When the perturbation noise is much stronger than the measurement noise, the recovery error of baseline two increases due to the perturbation noise and hence its sample complexity rises rapidly. However, in this case the proposed LOWEMS still achieves relatively small sample complexity, and its sample complexity converges to that of baseline one when σ_2 is relatively large.
[Figure 3: Experimental results on the truncated Netflix dataset. (a) Testing RMSE vs. number of time steps (d ∈ {1, 3, 6, 8} bins). (b) Validation RMSE vs. ρ.]
5.2 Real world experiments
We next test the LOWEMS approach in the context of a recommendation system using the (truncated) Netflix dataset. We eliminate those movies with few ratings, and those users rating few movies, and generate a truncated dataset with 3199 users, 1042 movies, and 2462840 ratings; hence the fraction of visible entries in the rating matrix is ≈ 0.74. All the ratings are distributed over a period of 2191 days. For the sake of robustness, we additionally impose a Frobenius norm penalty on the factor matrices U and V in (13). We keep the latest (in time) 10% of the ratings as a testing set. The remaining ratings are split into a validation set and a training set for the purpose of cross validation. We divide the remaining ratings into d ∈ {1, 3, 6, 8} bins of equal time span according to their timestamps. We use 5-fold cross validation, and we keep 1/5 of the ratings from the dth bin as a validation set. The number of latent factors r is set to 10. The Frobenius norm regularization parameter λ is set to 1. We also note that in practice one likely has no prior information on σ_1, σ_2 and hence ρ. However, model selection techniques like cross validation can be used to select the best ρ, incorporating the unknown prior information on the measurement/perturbation noise. We use root mean squared error (RMSE) to measure prediction accuracy. Since alternating minimization uses a random initialization, we generate 10 test RMSEs (shown as a boxplot) for the same testing set. Figure 3(a) shows that the proposed LOWEMS estimator improves the testing RMSE significantly with an appropriate ρ. Additionally, the performance improvement increases as d gets larger.

To further investigate how the parameter ρ affects accuracy, we also show the validation RMSE as a function of ρ in Figure 3(b). When ρ ≈ 1, LOWEMS achieves the best RMSE on the validation data. This further demonstrates that imposing an appropriate dynamic constraint should improve recovery accuracy in practice.
6 Conclusion
In this paper we consider the low-rank matrix recovery problem in a novel setting, where one
of the factor matrices changes over time. We propose the locally weighted matrix smoothing
(LOWEMS) framework, and have established error bounds for LOWEMS in both the matrix sensing
and matrix completion cases. Our analysis quantifies how the proposed estimator improves recovery
accuracy and reduces sample complexity compared to static recovery methods. Finally, we provide
both synthetic and real world experimental results to verify our analysis and demonstrate superior
empirical performance when exploiting dynamic constraints in a recommendation system.
Acknowledgments
This work was supported by grants NRL N00173-14-2-C001, AFOSR FA9550-14-1-0342, NSF
CCF-1409406, CCF-1350616, and CMMI-1537261.
References
[1] A. Agarwal, S. Negahban, and M. Wainwright. Noisy matrix decomposition via convex relaxation: Optimal rates in high dimensions. Ann. Stat., 40(2):1171–1197, 2012.
[2] P. Bühlmann and S. Van De Geer. Statistics for high-dimensional data: Methods, theory and applications. Springer-Verlag Berlin Heidelberg, 2011.
[3] E. Candès and Y. Plan. Matrix completion with noise. Proc. IEEE, 98(6):925–936, 2010.
[4] E. Candès and Y. Plan. Tight oracle inequalities for low-rank matrix recovery from a minimal number of noisy random measurements. IEEE Trans. Inform. Theory, 57(4):2342–2359, 2011.
[5] E. Candès and B. Recht. Exact matrix completion via convex optimization. Found. Comput. Math., 9(6):717–772, 2009.
[6] E. Candès and T. Tao. The power of convex relaxation: Near-optimal matrix completion. IEEE Trans. Inform. Theory, 56(5):2053–2080, 2010.
[7] M. Davenport, Y. Plan, E. van den Berg, and M. Wootters. 1-bit matrix completion. Inf. Inference, 3(3):189–223, 2014.
[8] M. Davenport and J. Romberg. An overview of low-rank matrix recovery from incomplete observations. IEEE J. Select. Top. Signal Processing, 10(4):608–622, 2016.
[9] G. Dror, N. Koenigstein, Y. Koren, and M. Weimer. The Yahoo! music dataset and KDD-Cup'11. In Proc. ACM SIGKDD Int. Conf. on Knowledge, Discovery, and Data Mining (KDD), San Diego, CA, Aug. 2011.
[10] M. Hardt. Understanding alternating minimization for matrix completion. In Proc. IEEE Symp. Found. Comp. Science (FOCS), Philadelphia, PA, Oct. 2014.
[11] M. Hardt and M. Wootters. Fast matrix completion without the condition number. In Proc. Conf. Learning Theory, Barcelona, Spain, June 2014.
[12] P. Jain and P. Netrapalli. Fast exact matrix completion with finite samples. In Proc. Conf. Learning Theory, Paris, France, July 2015.
[13] P. Jain, P. Netrapalli, and S. Sanghavi. Low-rank matrix completion using alternating minimization. In Proc. ACM Symp. Theory of Comput., Stanford, CA, June 2013.
[14] R. Keshavan, A. Montanari, and S. Oh. Matrix completion from noisy entries. In Proc. Adv. in Neural Processing Systems (NIPS), Vancouver, BC, Dec. 2009.
[15] O. Klopp. Noisy low-rank matrix completion with general sampling distribution. Bernoulli, 20(1):282–303, 2014.
[16] Y. Koren. The Bellkor solution to the Netflix grand prize, 2009.
[17] Y. Koren. Collaborative filtering with temporal dynamics. Comm. ACM, 53(4):89–97, 2010.
[18] N. Mohammadiha, P. Smaragdis, G. Panahandeh, and S. Doclo. A state-space approach to dynamic nonnegative matrix factorization. IEEE Trans. Signal Processing, 63(4):949–959, 2015.
[19] S. Negahban and M. Wainwright. Estimation of (near) low-rank matrices with noise and high-dimensional scaling. Ann. Stat., 39(2):1069–1097, 2011.
[20] S. Negahban and M. Wainwright. Restricted strong convexity and weighted matrix completion: Optimal bounds with noise. J. Machine Learning Research, 13(1):1665–1697, 2012.
[21] B. Recht, M. Fazel, and P. Parrilo. Guaranteed minimum-rank solutions of linear matrix equations via nuclear norm minimization. SIAM Rev., 52(3):471–501, 2010.
[22] B. Recht, W. Xu, and B. Hassibi. Necessary and sufficient conditions for success of the nuclear norm heuristic for rank minimization. In Proc. IEEE Conf. on Decision and Control (CDC), Cancun, Mexico, Dec. 2008.
[23] R. Sun and Z.-Q. Luo. Guaranteed matrix completion via nonconvex factorization. In Proc. IEEE Symp. Found. Comp. Science (FOCS), Berkeley, CA, Oct. 2015.
[24] J. A. Tropp. An introduction to matrix concentration inequalities. Found. Trends Mach. Learning, 8(1–2):1–230, 2015.
[25] T. Zhao, Z. Wang, and H. Liu. A nonconvex optimization framework for low rank matrix estimation. In Proc. Adv. in Neural Processing Systems (NIPS), Montréal, QC, Dec. 2015.
by gradient descent
Marcin Andrychowicz1 , Misha Denil1 , Sergio G?mez Colmenarejo1 , Matthew W. Hoffman1 ,
David Pfau1 , Tom Schaul1 , Brendan Shillingford1,2 , Nando de Freitas1,2,3
1
Google DeepMind
2
University of Oxford
3
Canadian Institute for Advanced Research
marcin.andrychowicz@gmail.com
{mdenil,sergomez,mwhoffman,pfau,schaul}@google.com
brendan.shillingford@cs.ox.ac.uk, nandodefreitas@google.com
Abstract
The move from hand-designed features to learned features in machine learning has
been wildly successful. In spite of this, optimization algorithms are still designed
by hand. In this paper we show how the design of an optimization algorithm can be
cast as a learning problem, allowing the algorithm to learn to exploit structure in
the problems of interest in an automatic way. Our learned algorithms, implemented
by LSTMs, outperform generic, hand-designed competitors on the tasks for which
they are trained, and also generalize well to new tasks with similar structure. We
demonstrate this on a number of tasks, including simple convex problems, training
neural networks, and styling images with neural art.
1 Introduction
Frequently, tasks in machine learning can be expressed as the problem of optimizing an objective function f(θ) defined over some domain θ ∈ Θ. The goal in this case is to find the minimizer θ* = arg min_{θ∈Θ} f(θ). While any method capable of minimizing this objective function can be applied, the standard approach for differentiable functions is some form of gradient descent, resulting in a sequence of updates

θ_{t+1} = θ_t − α_t ∇f(θ_t).
The performance of vanilla gradient descent, however, is hampered by the fact that it only makes use
of gradients and ignores second-order information. Classical optimization techniques correct this
behavior by rescaling the gradient step using curvature information, typically via the Hessian matrix
of second-order partial derivatives, although other choices such as the generalized Gauss-Newton matrix or Fisher information matrix are possible.
Much of the modern work in optimization is based around designing update rules tailored to specific
classes of problems, with the types of problems of interest differing between different research
communities. For example, in the deep learning community we have seen a proliferation of optimization methods specialized for high-dimensional, non-convex optimization problems. These include
momentum [Nesterov, 1983, Tseng, 1998], Rprop [Riedmiller and Braun, 1993], Adagrad [Duchi
et al., 2011], RMSprop [Tieleman and Hinton, 2012], and ADAM [Kingma and Ba, 2015]. More
focused methods can also be applied when more structure of the optimization problem is known
[Martens and Grosse, 2015]. In contrast, communities who focus on sparsity tend to favor very
different approaches [Donoho, 2006, Bach et al., 2012]. This is even more the case for combinatorial
optimization for which relaxations are often the norm [Nemhauser and Wolsey, 1988].
This industry of optimizer design allows different communities to create optimization methods which exploit structure in their problems of interest at the expense of potentially poor performance on problems outside of that scope. Moreover the No Free Lunch Theorems for Optimization [Wolpert and Macready, 1997] show that in the setting of combinatorial optimization, no algorithm is able to do better than a random strategy in expectation. This suggests that specialization to a subclass of problems is in fact the only way that improved performance can be achieved in general.

[Figure 1: The optimizer (left) is provided with performance of the optimizee (right) and proposes updates to increase the optimizee's performance; parameter updates flow one way and an error signal flows back. [photos: Bobolas, 2009, Maley, 2011]]

In this work we take a different tack and instead propose to replace hand-designed update rules with a learned update rule, which we call the optimizer g, specified by its own set of parameters φ. This results in updates to the optimizee f of the form

θ_{t+1} = θ_t + g_t(∇f(θ_t), φ).   (1)

A high level view of this process is shown in Figure 1. In what follows we will explicitly model the update rule g using a recurrent neural network (RNN) which maintains its own state and hence dynamically updates as a function of its iterates.
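Concretely, the inner loop in (1) just swaps a hand-designed rule for a learned callable g with parameters φ. A minimal sketch (the interface here is a hypothetical one of our own choosing):

```python
import numpy as np

def run_learned_optimizer(grad_f, g, phi, theta0, state0, T=100):
    """Run update rule (1): theta_{t+1} = theta_t + g(grad f(theta_t), state; phi).
    Any rule fitting the (grad, state, phi) -> (step, state) interface works."""
    theta, state = theta0.copy(), state0
    for _ in range(T):
        step, state = g(grad_f(theta), state, phi)
        theta = theta + step
    return theta

# Hand-designed rules fit the same interface, e.g. plain gradient descent:
sgd = lambda grad, state, phi: (-phi["lr"] * grad, state)
```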
1.1 Transfer learning and generalization
The goal of this work is to develop a procedure for constructing a learning algorithm which performs
well on a particular class of optimization problems. Casting algorithm design as a learning problem
allows us to specify the class of problems we are interested in through example problem instances.
This is in contrast to the ordinary approach of characterizing properties of interesting problems
analytically and using these analytical insights to design learning algorithms by hand.
It is informative to consider the meaning of generalization in this framework. In ordinary statistical
learning we have a particular function of interest, whose behavior is constrained through a data set of
example function evaluations. In choosing a model we specify a set of inductive biases about how
we think the function of interest should behave at points we have not observed, and generalization
corresponds to the capacity to make predictions about the behavior of the target function at novel
points. In our setting the examples are themselves problem instances, which means generalization
corresponds to the ability to transfer knowledge between different problems. This reuse of problem
structure is commonly known as transfer learning, and is often treated as a subject in its own right.
However, by taking a meta-learning perspective, we can cast the problem of transfer learning as one
of generalization, which is much better studied in the machine learning community.
One of the great success stories of deep-learning is that we can rely on the ability of deep networks to
generalize to new examples by learning interesting sub-structures. In this work we aim to leverage
this generalization power, but also to lift it from simple supervised learning to the more general
setting of optimization.
1.2 A brief history and related work
The idea of using learning to learn or meta-learning to acquire knowledge or inductive biases has a
long history [Thrun and Pratt, 1998]. More recently, Lake et al. [2016] have argued forcefully for
its importance as a building block in artificial intelligence. Similarly, Santoro et al. [2016] frame
multi-task learning as generalization, however unlike our approach they directly train a base learner
rather than a training algorithm. In general these ideas involve learning which occurs at two different
time scales: rapid learning within tasks and more gradual, meta learning across many different tasks.
Perhaps the most general approach to meta-learning is that of Schmidhuber [1992, 1993], building on work from [Schmidhuber, 1987], which considers networks that are able to modify their own weights. Such a system is differentiable end-to-end, allowing both the network and the learning
algorithm to be trained jointly by gradient descent with few restrictions. However this generality
comes at the expense of making the learning rules very difficult to train. Alternatively, the work
of Schmidhuber et al. [1997] uses the Success Story Algorithm to modify its search strategy rather
than gradient descent; a similar approach has been recently taken in Daniel et al. [2016] which uses
reinforcement learning to train a controller for selecting step-sizes.
Bengio et al. [1990, 1995] propose to learn updates which avoid back-propagation by using simple
parametric rules. In relation to the focus of this paper the work of Bengio et al. could be characterized
as learning to learn without gradient descent by gradient descent. The work of Runarsson and
Jonsson [2000] builds upon this work by replacing the simple rule with a neural network.
Cotter and Conwell [1990], and later Younger et al. [1999], also show that fixed-weight recurrent neural networks can exhibit dynamic behavior without needing to modify their network weights. Similarly this
has been shown in a filtering context [e.g. Feldkamp and Puskorius, 1998], which is directly related
to simple multi-timescale optimizers [Sutton, 1992, Schraudolph, 1999].
Finally, the work of Younger et al. [2001] and Hochreiter et al. [2001] connects these different threads
of research by allowing for the output of backpropagation from one network to feed into an additional
learning network, with both networks trained jointly. Our approach to meta-learning builds on this
work by modifying the network architecture of the optimizer in order to scale this approach to larger
neural-network optimization problems.
2 Learning to learn with recurrent neural networks
In this work we consider directly parameterizing the optimizer. As a result, in a slight abuse of notation we will write the final optimizee parameters θ*(f, φ) as a function of the optimizer parameters φ and the function in question. We can then ask the question: What does it mean for an optimizer to be good? Given a distribution of functions f we will write the expected loss as

L(φ) = E_f [ f( θ*(f, φ) ) ].   (2)
As noted earlier, we will take the update steps g_t to be the output of a recurrent neural network m, parameterized by φ, whose state we will denote explicitly with h_t. Next, while the objective function in (2) depends only on the final parameter value, for training the optimizer it will be convenient to have an objective that depends on the entire trajectory of optimization, for some horizon T,

L(φ) = E_f [ Σ_{t=1}^T w_t f(θ_t) ]   where   θ_{t+1} = θ_t + g_t,  [g_t; h_{t+1}] = m(∇_t, h_t, φ).   (3)

Here w_t ∈ R_{≥0} are arbitrary weights associated with each time-step and we will also use the notation ∇_t = ∇_θ f(θ_t). This formulation is equivalent to (2) when w_t = 1[t = T], but later we will describe why using different weights can prove useful.
We can minimize the value of L(φ) using gradient descent on φ. The gradient estimate ∂L(φ)/∂φ can be computed by sampling a random function f and applying backpropagation to the computational graph in Figure 2. We allow gradients to flow along the solid edges in the graph, but gradients along the dashed edges are dropped. Ignoring gradients along the dashed edges amounts to making the assumption that the gradients of the optimizee do not depend on the optimizer parameters, i.e. ∂∇_t/∂φ = 0. This assumption allows us to avoid computing second derivatives of f.

Examining the objective in (3) we see that the gradient is non-zero only for terms where w_t ≠ 0. If we use w_t = 1[t = T] to match the original problem, then gradients of trajectory prefixes are zero and only the final optimization step provides information for training the optimizer. This renders Backpropagation Through Time (BPTT) inefficient. We solve this problem by relaxing the objective such that w_t > 0 at intermediate points along the trajectory. This changes the objective function, but allows us to train the optimizer on partial trajectories. For simplicity, in all our experiments we use w_t = 1 for every t.
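A minimal PyTorch-style sketch of the unrolled objective (3) with truncated BPTT is given below; the detach call implements the dropped dashed edges (∂∇_t/∂φ = 0). This is our own illustration, not the paper's code (their experiments used Torch7), and θ_0 must be a tensor with requires_grad=True:

```python
import torch

def meta_loss(f, theta0, opt_net, h0, T=20):
    """Unrolled objective (3) with w_t = 1. opt_net maps (grad, hidden) ->
    (update, hidden) and carries the optimizer parameters phi."""
    theta, h = theta0, h0
    total = 0.0
    for _ in range(T):
        loss = f(theta)
        total = total + loss                                   # w_t = 1 for every t
        grad = torch.autograd.grad(loss, theta, retain_graph=True)[0]
        update, h = opt_net(grad.detach(), h)                  # detach <=> drop dashed edges
        theta = theta + update
    return total                                               # total.backward() gives dL/dphi
```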
3
t-2
ft-2
Optimizee
?t-2
t-1
ft-1
?t-1
+
ht-2
gt
?t-1
m
?t+1
+
gt-1
?t-2
Optimizer
?t
+
gt-2
t
ft
?t
m
ht-1
m
ht
ht+1
Figure 2: Computational graph used for computing the gradient of the optimizer.
2.1 Coordinatewise LSTM optimizer
One challenge in applying RNNs in our setting is that we want to be able to optimize at least tens of
thousands of parameters. Optimizing at this scale with a fully connected RNN is not feasible as it
would require a huge hidden state and an enormous number of parameters. To avoid this difficulty we
will use an optimizer m which operates coordinatewise on the parameters of the objective function,
similar to other common update rules like RMSprop and ADAM. This coordinatewise network
architecture allows us to use a very small network that only looks at a single coordinate to define the
optimizer and share optimizer parameters across different parameters of the optimizee.
Different behavior on each coordinate is achieved by using separate activations for each objective
function parameter. In addition to allowing us to use a small network for this optimizer, this setup has
the nice effect of making the optimizer invariant to the order of parameters in the network, since the
same update rule is used independently on each coordinate.
We implement the update rule for each coordinate using a two-layer Long Short Term Memory (LSTM) network [Hochreiter and Schmidhuber, 1997], using the now-standard forget gate architecture. The network takes as input the optimizee gradient for a single coordinate as well as the previous hidden state and outputs the update for the corresponding optimizee parameter. We will refer to this architecture, illustrated in Figure 3, as an LSTM optimizer.

[Figure 3: One step of an LSTM optimizer. All LSTMs have shared parameters, but separate hidden states.]

The use of recurrence allows the LSTM to learn dynamic update rules which integrate information from the history of gradients, similar to momentum. This is known to have many desirable properties in convex optimization [see e.g. Nesterov, 1983] and in fact many recent learning procedures, such as ADAM, use momentum in their updates.
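A sketch of this architecture in PyTorch is given below; the hyperparameters (two layers, 20 hidden units, output rescaling by 0.1) follow the text and Section 3.2, but the module itself is our own reconstruction, not the released code:

```python
import torch
import torch.nn as nn

class LSTMOptimizer(nn.Module):
    """Coordinatewise two-layer LSTM update rule: weights are shared across
    all optimizee parameters, hidden states are kept per coordinate."""
    def __init__(self, hidden=20):
        super().__init__()
        self.lstm = nn.LSTM(input_size=1, hidden_size=hidden, num_layers=2)
        self.out = nn.Linear(hidden, 1)
        self.hidden = hidden

    def init_state(self, n_params):
        shape = (2, n_params, self.hidden)       # (num_layers, batch, hidden)
        return torch.zeros(shape), torch.zeros(shape)

    def forward(self, grad, state):
        # Each coordinate is one "batch" element, so the same LSTM weights
        # process every coordinate while `state` stays per-coordinate.
        g = grad.view(1, -1, 1)                  # (seq=1, batch=n_params, feat=1)
        out, state = self.lstm(g, state)
        update = 0.1 * self.out(out).view(-1)    # output rescaling (Section 3.2)
        return update, state
```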
Preprocessing and postprocessing Optimizer inputs and outputs can have very different magnitudes depending on the class of function being optimized, but neural networks usually work robustly
only for inputs and outputs which are neither very small nor very large. In practice rescaling inputs
and outputs of an LSTM optimizer using suitable constants (shared across all timesteps and functions
f) is sufficient to avoid this problem. In Appendix A we propose a different method of preprocessing the optimizer inputs which is more robust and gives slightly better performance.
Figure 4: Comparisons between learned and hand-crafted optimizers performance. Learned optimizers are shown with solid lines and hand-crafted optimizers are shown with dashed lines. Units for the
y axis in the MNIST plots are logits. Left: Performance of different optimizers on randomly sampled
10-dimensional quadratic functions. Center: the LSTM optimizer outperforms standard methods
training the base network on MNIST. Right: Learning curves for steps 100-200 by an optimizer
trained to optimize for 100 steps (continuation of center plot).
3 Experiments
In all experiments the trained optimizers use two-layer LSTMs with 20 hidden units in each layer.
Each optimizer is trained by minimizing Equation 3 using truncated BPTT as described in Section 2.
The minimization is performed using ADAM with a learning rate chosen by random search.
We use early stopping when training the optimizer in order to avoid overfitting the optimizer. After
each epoch (some fixed number of learning steps) we freeze the optimizer parameters and evaluate its
performance. We pick the best optimizer (according to the final validation loss) and report its average
performance on a number of freshly sampled test problems.
We compare our trained optimizers with standard optimizers used in Deep Learning: SGD, RMSprop,
ADAM, and Nesterov's accelerated gradient (NAG). For each of these optimizers and each problem
we tuned the learning rate, and report results with the rate that gives the best final error for each
problem. When an optimizer has more parameters than just a learning rate (e.g. decay coefficients for
ADAM) we use the default values from the optim package in Torch7. Initial values of all optimizee
parameters were sampled from an IID Gaussian distribution.
3.1 Quadratic functions
In this experiment we consider training an optimizer on a simple class of synthetic 10-dimensional quadratic functions. In particular we consider minimizing functions of the form

f(θ) = ‖Wθ − y‖_2²

for different 10×10 matrices W and 10-dimensional vectors y whose elements are drawn from an IID Gaussian distribution. Optimizers were trained by optimizing random functions from this family and tested on newly sampled functions from the same distribution. Each function was optimized for 100 steps and the trained optimizers were unrolled for 20 steps. We have not used any preprocessing, nor postprocessing.
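Sampling such a task is a one-liner; the snippet below (our own sketch) generates a random quadratic compatible with the meta-training loop sketched in Section 2:

```python
import torch

def sample_quadratic(n=10, seed=0):
    """One task f(theta) = ||W theta - y||_2^2 with IID Gaussian W and y."""
    g = torch.Generator().manual_seed(seed)
    W = torch.randn(n, n, generator=g)
    y = torch.randn(n, generator=g)
    return lambda theta: ((W @ theta - y) ** 2).sum()

f = sample_quadratic(seed=0)
theta0 = torch.zeros(10, requires_grad=True)
print(f(theta0))          # loss at the initial point, ready for unrolled training
```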
Learning curves for different optimizers, averaged over many functions, are shown in the left plot of
Figure 4. Each curve corresponds to the average performance of one optimization algorithm on many
test functions; the solid curve shows the learned optimizer performance and dashed curves show
the performance of the standard baseline optimizers. It is clear the learned optimizers substantially
outperform the baselines in this setting.
3.2 Training a small neural network on MNIST
In this experiment we test whether trainable optimizers can learn to optimize a small neural network
on MNIST, and also explore how the trained optimizers generalize to functions beyond those they
were trained on. To this end, we train the optimizer to optimize a base network and explore a series
of modifications to the network architecture and training procedure at test time.
Figure 5: Comparisons between learned and hand-crafted optimizers performance. Units for the
y axis are logits. Left: Generalization to the different number of hidden units (40 instead of 20).
Center: Generalization to the different number of hidden layers (2 instead of 1). This optimization
problem is very hard, because the hidden layers are very narrow. Right: Training curves for an MLP
with 20 hidden units using ReLU activations. The LSTM optimizer was trained on an MLP with
sigmoid activations.
Figure 6: Systematic study of final MNIST performance as the optimizee architecture is varied,
using sigmoid non-linearities. The vertical dashed line in the left-most plot denotes the architecture
at which the LSTM is trained and the horizontal line shows the final performance of the trained
optimizer in this setting.
In this setting the objective function f(θ) is the cross entropy of a small MLP with parameters θ. The values of f as well as the gradients ∂f(θ)/∂θ are estimated using random minibatches of 128 examples. The base network is an MLP with one hidden layer of 20 units using a sigmoid activation function. The only source of variability between different runs is the initial value θ_0 and randomness in minibatch selection. Each optimization was run for 100 steps and the trained optimizers were unrolled for 20 steps. We used the input preprocessing described in Appendix A and rescaled the outputs of the LSTM by the factor 0.1.
Learning curves for the base network using different optimizers are displayed in the center plot of Figure 4. In this experiment NAG, ADAM, and RMSprop exhibit roughly equivalent performance, while the LSTM optimizer outperforms them by a significant margin. The right plot in Figure 4 compares the
performance of the LSTM optimizer if it is allowed to run for 200 steps, despite having been trained
to optimize for 100 steps. In this comparison we re-used the LSTM optimizer from the previous
experiment, and here we see that the LSTM optimizer continues to outperform the baseline optimizers
on this task.
Generalization to different architectures Figure 5 shows three examples of applying the LSTM
optimizer to train networks with different architectures than the base network on which it was trained.
The modifications are (from left to right) (1) an MLP with 40 hidden units instead of 20, (2) a
network with two hidden layers instead of one, and (3) a network using ReLU activations instead of
sigmoid. In the first two cases the LSTM optimizer generalizes well, and continues to outperform
the hand-designed baselines despite operating outside of its training regime. However, changing
the activation function to ReLU makes the dynamics of the learning procedure sufficiently different
that the learned optimizer is no longer able to generalize. Finally, in Figure 6 we show the results
of systematically varying the tested architecture; for the LSTM results we again used the optimizer
trained using 1 layer of 20 units and sigmoid non-linearities. Note that in this setting where the
Figure 7: Optimization performance on the CIFAR-10 dataset and subsets. Shown on the left is the
LSTM optimizer versus various baselines trained on CIFAR-10 and tested on a held-out test set. The
two plots on the right are the performance of these optimizers on subsets of the CIFAR labels. The
additional optimizer LSTM-sub has been trained only on the heldout labels and is hence transferring
to a completely novel dataset.
test-set problems are similar enough to those in the training set we see even better generalization than
the baseline optimizers.
3.3 Training a convolutional network on CIFAR-10
Next we test the performance of the trained neural optimizers on optimizing classification performance
for the CIFAR-10 dataset [Krizhevsky, 2009]. In these experiments we used a model with both
convolutional and feed-forward layers. In particular, the model used for these experiments includes
three convolutional layers with max pooling followed by a fully-connected layer with 32 hidden units;
all non-linearities were ReLU activations with batch normalization.
The coordinatewise network decomposition introduced in Section 2.1, and used in the previous experiment, utilizes a single LSTM architecture with shared weights, but separate hidden states, for each optimizee parameter. We found that this decomposition was not sufficient for the model architecture introduced in this section due to the differences between the fully connected and convolutional layers. Instead we modify the optimizer by introducing two LSTMs: one proposes parameter
updates for the fully connected layers and the other updates the convolutional layer parameters. Like
the previous LSTM optimizer we still utilize a coordinatewise decomposition with shared weights
and individual hidden states, however LSTM weights are now shared only between parameters of the
same type (i.e. fully-connected vs. convolutional).
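One simple way to realize this split is to route each parameter's gradient by layer type; the sketch below is our own structuring of that idea, not the authors' code:

```python
def split_updates(named_grads, conv_opt, fc_opt, conv_state, fc_state):
    """Route each parameter's gradient to the LSTM matching its layer type.
    `named_grads` maps parameter names to (flattened) gradient tensors;
    both optimizers expose the (grad, state) -> (update, state) interface."""
    updates = {}
    for name, grad in named_grads.items():
        if "conv" in name:                       # naming convention is ours
            updates[name], conv_state[name] = conv_opt(grad, conv_state[name])
        else:
            updates[name], fc_state[name] = fc_opt(grad, fc_state[name])
    return updates, conv_state, fc_state
```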
The performance of this trained optimizer compared against the baseline techniques is shown in
Figure 7. The left-most plot displays the results of using the optimizer to fit a classifier on a held-out
test set. The additional two plots on the right display the performance of the trained optimizer on
modified datasets which only contain a subset of the labels, i.e. the CIFAR-2 dataset only contains
data corresponding to 2 of the 10 labels. Additionally we include an optimizer LSTM-sub which was
only trained on the held-out labels.
In all these examples we can see that the LSTM optimizer learns much more quickly than the baseline
optimizers, with significant boosts in performance for the CIFAR-5 and especially CIFAR-2 datasets. We also see that the optimizer trained only on a disjoint subset of the data is hardly affected by this difference and transfers well to the additional dataset.
3.4 Neural Art
The recent work on artistic style transfer using convolutional networks, or Neural Art [Gatys et al., 2015], gives a natural testbed for our method, since each content and style image pair gives rise to a different optimization problem. Each Neural Art problem starts from a content image, c, and a style image, s, and is given by

f(θ) = αL_content(c, θ) + L_style(s, θ) + L_reg(θ).

The minimizer of f is the styled image. The first two terms try to match the content and style of the styled image to that of their first argument, and the third term is a regularizer that encourages smoothness in the styled image. Details can be found in [Gatys et al., 2015].
Figure 8: Optimization curves for Neural Art. Content images come from the test set, which was not
used during the LSTM optimizer training. Note: the y-axis is in log scale and we zoom in on the
interesting portion of this plot. Left: Applying the training style at the training resolution. Right:
Applying the test style at double the training resolution.
Figure 9: Examples of images styled using the LSTM optimizer. Each triple consists of the content
image (left), style (right) and image generated by the LSTM optimizer (center). Left: The result of
applying the training style at the training resolution to a test image. Right: The result of applying a
new style to a test image at double the resolution on which the optimizer was trained.
We train optimizers using only 1 style and 1800 content images taken from ImageNet [Deng et al.,
2009]. We randomly select 100 content images for testing and 20 content images for validation of
trained optimizers. We train the optimizer on 64x64 content images from ImageNet and one fixed
style image. We then test how well it generalizes to a different style image and higher resolution
(128x128). Each image was optimized for 128 steps and trained optimizers were unrolled for 32
steps. Figure 9 shows the result of styling two different images using the LSTM optimizer. The
LSTM optimizer uses inputs preprocessing described in Appendix A and no postprocessing. See
Appendix C for additional images.
Figure 8 compares the performance of the LSTM optimizer to standard optimization algorithms. The
LSTM optimizer outperforms all standard optimizers if the resolution and style image are the same
as the ones on which it was trained. Moreover, it continues to perform very well when both the
resolution and style are changed at test time.
Finally, in Appendix B we qualitatively examine the behavior of the step directions generated by the
learned optimizer.
4 Conclusion
We have shown how to cast the design of optimization algorithms as a learning problem, which
enables us to train optimizers that are specialized to particular classes of functions. Our experiments
have confirmed that learned neural optimizers compare favorably against state-of-the-art optimization
methods used in deep learning. We witnessed a remarkable degree of transfer, with for example the
LSTM optimizer trained on 12,288 parameter neural art tasks being able to generalize to tasks with
49,152 parameters, different styles, and different content images all at the same time. We observed
similar impressive results when transferring to different architectures in the MNIST task.
The results on the CIFAR image labeling task show that the LSTM optimizers outperform handengineered optimizers when transferring to datasets drawn from the same data distribution.
References
F. Bach, R. Jenatton, J. Mairal, and G. Obozinski. Optimization with sparsity-inducing penalties. Foundations and Trends in Machine Learning, 4(1):1–106, 2012.
S. Bengio, Y. Bengio, and J. Cloutier. On the search for new learning rules for ANNs. Neural Processing Letters, 2(4):26–30, 1995.
Y. Bengio, S. Bengio, and J. Cloutier. Learning a synaptic learning rule. Université de Montréal, Département d'informatique et de recherche opérationnelle, 1990.
F. Bobolas. brain-neurons, 2009. URL https://www.flickr.com/photos/fbobolas/3822222947. Creative Commons Attribution-ShareAlike 2.0 Generic.
N. E. Cotter and P. R. Conwell. Fixed-weight networks can learn. In International Joint Conference on Neural Networks, pages 553–559, 1990.
C. Daniel, J. Taylor, and S. Nowozin. Learning step size controllers for robust neural network training. In Association for the Advancement of Artificial Intelligence, 2016.
J. Deng, W. Dong, R. Socher, L.-J. Li, K. Li, and L. Fei-Fei. ImageNet: A large-scale hierarchical image database. In Computer Vision and Pattern Recognition, pages 248–255. IEEE, 2009.
D. L. Donoho. Compressed sensing. Transactions on Information Theory, 52(4):1289–1306, 2006.
J. Duchi, E. Hazan, and Y. Singer. Adaptive subgradient methods for online learning and stochastic optimization. Journal of Machine Learning Research, 12:2121–2159, 2011.
L. A. Feldkamp and G. V. Puskorius. A signal processing framework based on dynamic neural networks with application to problems in adaptation, filtering, and classification. Proceedings of the IEEE, 86(11):2259–2277, 1998.
L. A. Gatys, A. S. Ecker, and M. Bethge. A neural algorithm of artistic style. arXiv Report 1508.06576, 2015.
S. Hochreiter and J. Schmidhuber. Long short-term memory. Neural Computation, 9(8):1735–1780, 1997.
S. Hochreiter, A. S. Younger, and P. R. Conwell. Learning to learn using gradient descent. In International Conference on Artificial Neural Networks, pages 87–94. Springer, 2001.
D. P. Kingma and J. Ba. Adam: A method for stochastic optimization. In International Conference on Learning Representations, 2015.
A. Krizhevsky. Learning multiple layers of features from tiny images. Technical report, 2009.
B. M. Lake, T. D. Ullman, J. B. Tenenbaum, and S. J. Gershman. Building machines that learn and think like people. arXiv Report 1604.00289, 2016.
T. Maley. neuron, 2011. URL https://www.flickr.com/photos/taylortotz101/6280077898. Creative Commons Attribution 2.0 Generic.
J. Martens and R. Grosse. Optimizing neural networks with Kronecker-factored approximate curvature. In International Conference on Machine Learning, pages 2408–2417, 2015.
G. L. Nemhauser and L. A. Wolsey. Integer and combinatorial optimization. John Wiley & Sons, 1988.
Y. Nesterov. A method of solving a convex programming problem with convergence rate O(1/k²). In Soviet Mathematics Doklady, volume 27, pages 372–376, 1983.
M. Riedmiller and H. Braun. A direct adaptive method for faster backpropagation learning: The RPROP algorithm. In International Conference on Neural Networks, pages 586–591, 1993.
T. P. Runarsson and M. T. Jonsson. Evolution and design of distributed learning rules. In IEEE Symposium on Combinations of Evolutionary Computation and Neural Networks, pages 59–63. IEEE, 2000.
A. Santoro, S. Bartunov, M. Botvinick, D. Wierstra, and T. Lillicrap. Meta-learning with memory-augmented neural networks. In International Conference on Machine Learning, 2016.
J. Schmidhuber. Evolutionary principles in self-referential learning; On learning how to learn: The meta-meta-... hook. PhD thesis, Institut f. Informatik, Tech. Univ. Munich, 1987.
J. Schmidhuber. Learning to control fast-weight memories: An alternative to dynamic recurrent networks. Neural Computation, 4(1):131–139, 1992.
J. Schmidhuber. A neural network that embeds its own meta-levels. In International Conference on Neural Networks, pages 407–412. IEEE, 1993.
J. Schmidhuber, J. Zhao, and M. Wiering. Shifting inductive bias with success-story algorithm, adaptive Levin search, and incremental self-improvement. Machine Learning, 28(1):105–130, 1997.
N. N. Schraudolph. Local gain adaptation in stochastic gradient descent. In International Conference on Artificial Neural Networks, volume 2, pages 569–574, 1999.
R. S. Sutton. Adapting bias by gradient descent: An incremental version of delta-bar-delta. In Association for the Advancement of Artificial Intelligence, pages 171–176, 1992.
S. Thrun and L. Pratt. Learning to learn. Springer Science & Business Media, 1998.
T. Tieleman and G. Hinton. Lecture 6.5-rmsprop: Divide the gradient by a running average of its recent magnitude. COURSERA: Neural Networks for Machine Learning, 4:2, 2012.
P. Tseng. An incremental gradient (-projection) method with momentum term and adaptive stepsize rule. Journal on Optimization, 8(2):506–531, 1998.
D. H. Wolpert and W. G. Macready. No free lunch theorems for optimization. Transactions on Evolutionary Computation, 1(1):67–82, 1997.
A. S. Younger, P. R. Conwell, and N. E. Cotter. Fixed-weight on-line learning. Transactions on Neural Networks, 10(2):272–283, 1999.
A. S. Younger, S. Hochreiter, and P. R. Conwell. Meta-learning with backpropagation. In International Joint Conference on Neural Networks, 2001.
6,038 | 6,462 | Solving Marginal MAP Problems with NP Oracles
and Parity Constraints
Yexiang Xue
Department of Computer Science
Cornell University
yexiang@cs.cornell.edu
Stefano Ermon
Department of Computer Science
Stanford University
ermon@cs.stanford.edu
Zhiyuan Li*
Institute of Interdisciplinary Information Sciences
Tsinghua University
lizhiyuan13@mails.tsinghua.edu.cn
Carla P. Gomes, Bart Selman
Department of Computer Science
Cornell University
{gomes,selman}@cs.cornell.edu
Abstract
Arising from many applications at the intersection of decision-making and machine
learning, Marginal Maximum A Posteriori (Marginal MAP) problems unify the
two main classes of inference, namely maximization (optimization) and marginal
inference (counting), and are believed to have higher complexity than both of
them. We propose XOR_MMAP, a novel approach to solve the Marginal MAP
problem, which represents the intractable counting subproblem with queries to
NP oracles, subject to additional parity constraints. XOR_MMAP provides a constant
factor approximation to the Marginal MAP problem, by encoding it as a single
optimization in a polynomial size of the original problem. We evaluate our approach
in several machine learning and decision-making applications, and show that our
approach outperforms several state-of-the-art Marginal MAP solvers.
1 Introduction
Typical inference queries to make predictions and learn probabilistic models from data include the
maximum a posteriori (MAP) inference task, which computes the most likely assignment of a set
of variables, as well as the marginal inference task, which computes the probability of an event
according to the model. Another common query is the Marginal MAP (MMAP) problem, which
involves both maximization (optimization over a set of variables) and marginal inference (averaging
over another set of variables).
Marginal MAP problems arise naturally in many machine learning applications. For example, learning
latent variable models can be formulated as a MMAP inference problem, where the goal is to optimize
over the model?s parameters while marginalizing all the hidden variables. MMAP problems also arise
naturally in the context of decision-making under uncertainty, where the goal is to find a decision
(optimization) that performs well on average across multiple probabilistic scenarios (averaging).
The Marginal MAP problem is known to be NP^PP-complete [18], which is commonly believed to be
harder than both MAP inference (NP-hard) and marginal inference (#P-complete). As supporting
evidence, MMAP problems are NP-hard even on tree structured probabilistic graphical models
[13]. Aside from attempts to solve MMAP problems exactly [17, 15, 14, 16], previous approximate
approaches fall into two categories, in general. The core idea of approaches in both categories is
* This research was done when Zhiyuan Li was an exchange student at Cornell University.
30th Conference on Neural Information Processing Systems (NIPS 2016), Barcelona, Spain.
to effectively approximate the intractable marginalization, which often involves averaging over an
exponentially large number of scenarios. One class of approaches [13, 11, 19, 12] use variational
forms to represent the intractable sum. Then the entire problem can be solved with message passing
algorithms, which correspond to searching for the best variational approximation in an iterative
manner. As another family of approaches, Sample Average Approximation (SAA) [20, 21] uses a
fixed set of samples to represent the intractable sum, which then transforms the entire problem into
a restricted optimization, only considering a finite number of samples. Both approaches treat the
optimization and marginalizing components separately. However, we will show that by solving these
two tasks in an integrated manner, we can obtain significant computational benefits.
Ermon et al. [8, 9] recently proposed an alternative approach to approximate intractable counting
problems. Their key idea is a mechanism to transform a counting problem into a series of optimization
problems, each corresponding to the original problem subject to randomly generated XOR constraints.
Based on this mechanism, they developed an algorithm providing a constant-factor approximation to
the counting (marginalization) problem.
We propose a novel algorithm, called XOR_MMAP, which approximates the intractable sum with a
series of optimization problems, which in turn are folded into the global optimization task. Therefore,
we effectively reduce the original MMAP inference to a single joint optimization of polynomial size
of the original problem.
We show that XOR_MMAP provides a constant factor approximation to the Marginal MAP problem.
Our approach also provides upper and lower bounds on the final result. The quality of the bounds can
be improved incrementally with increased computational effort.
We evaluate our algorithm on unweighted SAT instances and on weighted Markov Random Field
models, comparing our algorithm with variational methods, as well as sample average approximation.
We also show the effectiveness of our algorithm on applications in computer vision with deep neural
networks and in computational sustainability. Our sustainability application shows how MMAP
problems are also found in scenarios of searching for optimal policy interventions to maximize the
outcomes of probabilistic models. As a first example, we consider a network design application to
maximize the spread of cascades [20], which include modeling animal movements or information
diffusion in social networks. In this setting, the marginals of a probabilistic decision model represent
the probabilities for a cascade to reach certain target states (averaging), and the overall network
design problem is to make optimal policy interventions on the network structure to maximize the
spread of the cascade (optimization). As a second example, in a crowdsourcing domain, probabilistic
models are used to model people's behavior. The organizer would like to find an optimal incentive
mechanism (optimization) to steer people?s effort towards crucial tasks, taking into account the
probabilistic behavioral model (averaging) [22].
We show that XOR_MMAP is able to find considerably better solutions than those found by previous
methods, as well as provide tighter bounds.
2 Preliminaries
Problem Definition Let $A = \{0,1\}^m$ be the set of all possible assignments to binary variables
$a_1, \ldots, a_m$ and $X = \{0,1\}^n$ be the set of assignments to binary variables $x_1, \ldots, x_n$. Let $w(x, a) : X \times A \to \mathbb{R}_+$ be a function that maps every assignment to a non-negative value. Typical queries over
a probabilistic model include the maximization task, which requires the computation of $\max_{a \in A} w(a)$,
and the marginal inference task $\sum_{x \in X} w(x)$, which sums over X.
Arising naturally from many machine learning applications, the following Marginal Maximum A
Posteriori (Marginal MAP) problem is a joint inference task, which combines the two aforementioned
inference tasks:
$$\max_{a \in A} \sum_{x \in X} w(x, a). \quad (1)$$
We consider the case where the counting problem $\sum_{x \in X} w(x, a)$ and the maximization problem
$\max_{a \in A} \#w(a)$ are defined over sets of exponential size, therefore both are intractable in general.
Counting by Hashing and Optimization Our approach is based on a recent theoretical result that
transforms a counting problem into a series of optimization problems [8, 9, 2, 1]. A family of functions
$H = \{h : \{0,1\}^n \to \{0,1\}^k\}$ is said to be pairwise independent if the following two conditions
hold for any function h randomly chosen from the family H: (1) $\forall x \in \{0,1\}^n$, the random variable
h(x) is uniformly distributed in $\{0,1\}^k$, and (2) $\forall x_1, x_2 \in \{0,1\}^n$ with $x_1 \neq x_2$, the random variables
$h(x_1)$ and $h(x_2)$ are independent.
We sample a matrix $A \in \{0,1\}^{k \times n}$ and a vector $b \in \{0,1\}^k$ uniformly at random to form the
function family $H_{A,b} = \{h_{A,b} : h_{A,b}(x) = Ax + b \bmod 2\}$. It is possible to show that $H_{A,b}$
is pairwise independent [8, 9]. Notice that in this case, each function $h_{A,b}(x) = Ax + b \bmod 2$
corresponds to k parity constraints. One useful way to think about pairwise independent functions
is to imagine them as functions that randomly project elements of $\{0,1\}^n$ into $2^k$ buckets. Define
$B_h(g) = \{x \in \{0,1\}^n : h_{A,b}(x) = g\}$ to be a "bucket" that includes all elements of $\{0,1\}^n$ whose
mapped value $h_{A,b}(x)$ is the vector g ($g \in \{0,1\}^k$). Intuitively, if we randomly sample a function $h_{A,b}$
from a pairwise independent family, then we get the following: $x \in \{0,1\}^n$ has an equal probability
to be in any bucket B(g), and the bucket locations of any two different elements x, y are independent.

Algorithm 1: XOR_Binary($w : A \times X \to \{0,1\}$, $a_0$, $k$)
  Sample a function $h_k : X \to \{0,1\}^k$ from a pairwise independent function family;
  Query an NP oracle on whether
    $W(a_0, h_k) = \{x \in X : w(a_0, x) = 1, h_k(x) = 0\}$ is empty;
  Return true if $W(a_0, h_k) \neq \emptyset$, otherwise return false.
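To make the construction concrete, the following sketch (plain Python with numpy; our own illustration, with names not taken from the paper) samples a hash $h_{A,b}(x) = Ax + b \bmod 2$ and verifies that it spreads $\{0,1\}^n$ over $2^k$ buckets of roughly equal size:

```python
import numpy as np

def sample_hash(n, k, rng):
    """Sample h(x) = Ax + b mod 2 from the pairwise independent family H_{A,b}."""
    A = rng.integers(0, 2, size=(k, n))
    b = rng.integers(0, 2, size=k)
    return lambda x: (A @ x + b) % 2

rng = np.random.default_rng(0)
n, k = 10, 3
h = sample_hash(n, k, rng)

# Every x in {0,1}^n lands in one of 2^k buckets; for a randomly drawn h,
# bucket sizes concentrate around 2^(n-k) = 128.
buckets = {}
for i in range(2 ** n):
    x = np.array([(i >> j) & 1 for j in range(n)])
    g = tuple(h(x))
    buckets[g] = buckets.get(g, 0) + 1
print(sorted(buckets.values()))
```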
3 XOR_MMAP Algorithm
3.1 Binary Case
We first solve the Marginal MAP problem for the binary case, in which the function $w : A \times X \to \{0,1\}$ outputs either 0 or 1. We will extend the result to the weighted case in the next section.
Since $a \in A$ often represents decision variables when MMAP problems are used in decision making,
we call a fixed assignment to the vector $a = a_0$ a "solution strategy". To simplify the notation, we
use $W(a_0)$ to represent the set $\{x \in X : w(a_0, x) = 1\}$, and use $W(a_0, h_k)$ to represent the set
$\{x \in X : w(a_0, x) = 1 \text{ and } h_k(x) = 0\}$, in which $h_k$ is sampled from a pairwise independent
function family that maps X to $\{0,1\}^k$. We write $\#w(a_0)$ as shorthand for the count $|\{x \in X : w(a_0, x) = 1\}| = \sum_{x \in X} w(a_0, x)$. Our algorithm depends on the following result:
Theorem 3.1. (Ermon et al. [8]) For a fixed solution strategy $a_0 \in A$,
- Suppose $\#w(a_0) \geq 2^{k_0}$; then for any $k \leq k_0$, with probability $1 - \frac{2^c}{(2^c-1)^2}$, Algorithm XOR_Binary($w, a_0, k - c$) returns true.
- Suppose $\#w(a_0) < 2^{k_0}$; then for any $k \geq k_0$, with probability $1 - \frac{2^c}{(2^c-1)^2}$, Algorithm XOR_Binary($w, a_0, k + c$) returns false.
To understand Theorem 3.1 intuitively, we can think of $h_k$ as a function that maps every element of
the set $W(a_0)$ into $2^k$ buckets. Because $h_k$ comes from a pairwise independent function family, each
element of $W(a_0)$ has an equal probability of landing in any one of the $2^k$ buckets, and the buckets
in which any two elements end up are mutually independent. Suppose the count of solutions for a
fixed strategy, $\#w(a_0)$, is $2^{k_0}$; then with high probability there will be at least one element located
in a randomly selected bucket if the number of buckets $2^k$ is less than $2^{k_0}$. Otherwise, with high
probability there will be no element in a randomly selected bucket.
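This phase transition is easy to observe numerically. The toy experiment below (ours, not from the paper) plants a solution set of size $2^{k_0}$ and measures how often the all-zero bucket is non-empty as the number of parity constraints k varies:

```python
import numpy as np

rng = np.random.default_rng(1)
n, k0 = 12, 6
# Plant a solution set W of size 2^k0 inside {0,1}^n.
W = rng.choice(2 ** n, size=2 ** k0, replace=False)
X = np.array([[(w >> j) & 1 for j in range(n)] for w in W])

def bucket_nonempty_freq(k, trials=300):
    """Fraction of random parity functions h_k for which the all-zero bucket
    contains at least one element of W."""
    hits = 0
    for _ in range(trials):
        A = rng.integers(0, 2, size=(k, n))
        b = rng.integers(0, 2, size=k)
        # h_k(x) = 0 iff all k parities vanish.
        if np.any(((X @ A.T + b) % 2).sum(axis=1) == 0):
            hits += 1
    return hits / trials

for k in (k0 - 3, k0, k0 + 3):
    print(k, bucket_nonempty_freq(k))  # close to 1 below k0, close to 0 above
```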
Theorem 3.1 provides us with a way to obtain a rough count of $\#w(a_0)$ via a series of tests of
whether $W(a_0, h_k)$ is empty, subject to extra parity functions $h_k$. This transforms a counting problem
into a series of NP queries, which can also be thought of as optimization queries. This transformation
is extremely helpful for the Marginal MAP problem. As noted earlier, the main challenge for the
Marginal MAP problem is the intractable sum embedded in the maximization. Nevertheless, the
whole problem can be re-written as a single optimization if the intractable sum can be approximated
well by solving an optimization problem over the same domain.
We therefore design Algorithm XOR_MMAP, which is able to provide a constant-factor approximation
to the Marginal MAP problem. The whole algorithm is shown in Algorithm 3. In its main procedure
Algorithm 2: XOR_K($w : A \times X \to \{0,1\}$, $k$, $T$)
  Sample T pairwise independent hash functions
    $h_k^{(1)}, h_k^{(2)}, \ldots, h_k^{(T)} : X \to \{0,1\}^k$;
  Query the oracle
    $\max_{a \in A,\, x^{(i)} \in X} \sum_{i=1}^{T} w(a, x^{(i)})$  s.t. $h_k^{(i)}(x^{(i)}) = 0$, $i = 1, \ldots, T$.  (2)
  Return true if the max value is larger than $\lceil T/2 \rceil$, otherwise return false.

Algorithm 3: XOR_MMAP($w : A \times X \to \{0,1\}$, $n = \log_2 |X|$, $m = \log_2 |A|$, $T$)
  k = n;
  while k > 0 do
    if XOR_K(w, k, T) then
      Return $2^k$;
    end
    k = k - 1;
  end
  Return 1;
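For intuition only, here is a brute-force Python sketch of Algorithms 2-3, with exhaustive enumeration standing in for the NP oracle (the MIP solver in the paper); it is our own toy and only feasible for very small n and m:

```python
import itertools
import numpy as np

def xor_mmap_bruteforce(w, n, m, T, rng):
    """Sketch of XOR_MMAP: w(a, x) maps bit-tuples to {0, 1}."""
    def xor_k(k):
        # T independent parity functions h^(i): {0,1}^n -> {0,1}^k.
        hs = [(rng.integers(0, 2, (k, n)), rng.integers(0, 2, k)) for _ in range(T)]
        best = 0
        for a in itertools.product((0, 1), repeat=m):
            # Given a, the replicates decouple: maximize each one separately.
            total = 0
            for A, b in hs:
                feasible = (w(a, x) for x in itertools.product((0, 1), repeat=n)
                            if not ((A @ np.array(x) + b) % 2).any())
                total += max(feasible, default=0)
            best = max(best, total)
        return best > T / 2
    for k in range(n, 0, -1):
        if xor_k(k):
            return 2 ** k
    return 1

# Tiny example: w(a, x) = 1 iff x_1 = a_1, so #w(a) = 2^(n-1) = 8 for every a.
est = xor_mmap_bruteforce(lambda a, x: int(x[0] == a[0]), n=4, m=2, T=3,
                          rng=np.random.default_rng(0))
print(est)  # should land within a constant factor of 8
```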
XOR_K, the algorithm transforms the Marginal MAP problem into an optimization over the sum of T
replicates of the original function w. Here, $x^{(i)} \in X$ is a replicate of the original x, and $w(a, x^{(i)})$ is
the original function w, but taking $x^{(i)}$ as one of its inputs. All replicates share the common input a. In
addition, each replicate is subject to an independent set of parity constraints on $x^{(i)}$. Theorem 3.2
states that XOR_MMAP provides a constant-factor approximation to the Marginal MAP problem:
Theorem 3.2. For $T \geq \frac{m \ln 2 + \ln(n/\delta)}{\alpha^*(c)}$, with probability $1 - \delta$, XOR_MMAP($w, \log_2 |X|, \log_2 |A|, T$)
outputs a $2^{2c}$-approximation to the Marginal MAP problem $\max_{a \in A} \#w(a)$. Here $\alpha^*(c)$ is a constant.
Let us first understand the theorem in an intuitive way. Without loss of generality, suppose the
optimal value $\max_{a \in A} \#w(a) = 2^{k_0}$. Denote by $a^*$ the optimal solution, i.e., $\#w(a^*) = 2^{k_0}$.
According to Theorem 3.1, the set $W(a^*, h_k)$ has a high probability of being non-empty for any
function $h_k$ that contains $k < k_0$ parity constraints. In this case, the optimization problem
$\max_{x^{(i)} \in X,\, h_k^{(i)}(x^{(i)}) = 0} w(a^*, x^{(i)})$ for one replicate $x^{(i)}$ almost always returns 1. Because the $h_k^{(i)}$
($i = 1, \ldots, T$) are sampled independently, the sum $\sum_{i=1}^{T} w(a^*, x^{(i)})$ is likely to be larger than $\lceil T/2 \rceil$,
since each term in the sum is likely to be 1 (under the fixed $a^*$). Furthermore, since XOR_K maximizes
this sum over all possible strategies $a \in A$, the sum it finds will be at least as good as the one attained
at $a^*$, which is already over $\lceil T/2 \rceil$. Therefore, we conclude that when $k < k_0$, XOR_K will return
true with high probability.
We can develop similar arguments to conclude that XOR_K will return false with high probability
when more than $k_0$ XOR constraints are added. Notice that replication and an additional union bound
argument are necessary to establish the probabilistic guarantee in this case. As a counter-example,
suppose the function $w(x, a) = 1$ if and only if $x = a$, and otherwise $w(x, a) = 0$ ($m = n$ in this case). If
we set the number of replicates $T = 1$, then XOR_K will almost always return 1 when $k < n$, which
suggests that there are $2^n$ solutions to the MMAP problem. Nevertheless, in this case the true optimal
value of $\max_a \#w(a)$ is 1, which is far away from $2^n$. This suggests that at least two replicates
are needed.
Lemma 3.3. For $T \geq \frac{m \ln 2 + \ln(n/\delta)}{\alpha^*(c)}$, the procedure XOR_K(w, k) satisfies:
- Suppose $\exists a^* \in A$ s.t. $\#w(a^*) \geq 2^k$; then with probability $1 - \frac{\delta}{n 2^m}$, XOR_K($w, k - c, T$) returns true.
- Suppose $\forall a_0 \in A$, $\#w(a_0) < 2^k$; then with probability $1 - \frac{\delta}{n}$, XOR_K($w, k + c, T$) returns false.
Proof. Claim 1: If there exists such an $a^*$ satisfying $\#w(a^*) \geq 2^k$, pick $a_0 = a^*$. Let $X^{(i)}(a_0) = \max_{x^{(i)} \in X,\, h_{k-c}^{(i)}(x^{(i)}) = 0} w(a_0, x^{(i)})$, for $i = 1, \ldots, T$. From Theorem 3.1, $X^{(i)}(a_0) = 1$ holds with
probability $1 - \frac{2^c}{(2^c-1)^2}$. Let $\alpha^*(c) = D\!\left(\tfrac{1}{2} \,\middle\|\, \tfrac{2^c}{(2^c-1)^2}\right)$. By the Chernoff bound, we have
$$\Pr\left[\max_{a \in A} \sum_{i=1}^{T} X^{(i)}(a) \leq T/2\right] \leq \Pr\left[\sum_{i=1}^{T} X^{(i)}(a_0) \leq T/2\right] \leq e^{-D\left(\frac{1}{2} \| \frac{2^c}{(2^c-1)^2}\right) T} = e^{-\alpha^*(c) T}, \quad (3)$$
where
$$\alpha^*(c) = 2\ln(2^c - 1) - \ln 2 - \frac{1}{2}\ln(2^c) - \frac{1}{2}\ln\left((2^c-1)^2 - 2^c\right) \geq \left(\frac{c}{2} - 2\right)\ln 2.$$
For $T \geq \frac{\ln 2 \cdot m + \ln(n/\delta)}{\alpha^*(c)}$, we have $e^{-\alpha^*(c)T} \leq \frac{\delta}{n 2^m}$. Thus, with probability $1 - \frac{\delta}{n 2^m}$, we have
$\max_{a \in A} \sum_{i=1}^{T} X^{(i)}(a) > T/2$, which implies that XOR_K($w, k - c, T$) returns true.
Claim 2: The proof is almost the same as that of Claim 1, except that we need to use a union bound to make
the property hold for all $a \in A$ simultaneously. As a result, the success probability will be $1 - \frac{\delta}{n}$
instead of $1 - \frac{\delta}{n 2^m}$. The proof is left to the supplementary materials.
Proof. (Theorem 3.2) With probability $1 - n \cdot \frac{\delta}{n} = 1 - \delta$, the outputs of the n calls of XOR_K(w, k, T)
(with different $k = 1, \ldots, n$) all satisfy the two claims in Lemma 3.3 simultaneously. Suppose
$\max_{a \in A} \#w(a) \in [2^{k_0}, 2^{k_0+1})$; then we have (i) $\forall k \geq k_0 + c + 1$, XOR_K(w, k, T) returns false, and (ii)
$\forall k \leq k_0 - c$, XOR_K(w, k, T) returns true. Therefore, with probability $1 - \delta$, the output of
XOR_MMAP is guaranteed to be between $2^{k_0 - c}$ and $2^{k_0 + c}$.
The approximation bound in Theorem 3.2 is a worst-case guarantee. We can obtain a tight bound (e.g.,
a 16-approximation) with a large number of replicates T. Nevertheless, we keep T small, and therefore a loose
bound, in our experiments, trading off the formal guarantee against the empirical complexity.
In practice, our method performs well, even with loose bounds. Moreover, XOR_K procedures with
different input k are not uniformly hard. We therefore can run them in parallel. We can obtain a looser
bound at any given time, based on all completed XOR_K procedures. Finally, if we have access to a
polynomial approximation algorithm for the optimization problem in XOR_K, we can propagate this
bound through the analysis, and again get a guaranteed bound, albeit looser for the MMAP problem.
Reduce the Number of Replicates We further develop a few variants of XOR_MMAP in the supplementary materials to reduce the number of replicates, as well as the number of calls to the XOR_K
procedure, while preserving the same approximation bound.
Implementation We solve the optimization problem in XOR_K using Mixed Integer Programming
(MIP). Without loss of generality, we assume w(a, x) is an indicator variable, which is 1 iff (a, x)
satisfies constraints represented in Conjunctive Normal Form (CNF). We introduce extra variables
to represent the sum $\sum_i w(a, x^{(i)})$; the details are left to the supplementary materials. The XORs in
Equation (2) are encoded as MIP constraints using the Yannakakis encoding, similarly to [7].
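We do not reproduce the Yannakakis encoding here. As a hedged illustration that parity constraints are linearly representable at all, the sketch below checks the simpler textbook integer-programming encoding with a slack variable (our own example; the paper's actual encoding differs):

```python
import itertools

def parity_encoding_holds(x_bits, c):
    """Textbook IP encoding of sum_i x_i = c (mod 2): introduce an integer
    slack q >= 0 and require sum_i x_i - 2*q == c. The constraint is
    satisfiable in q exactly when the parity holds."""
    s = sum(x_bits)
    return (s - c) >= 0 and (s - c) % 2 == 0  # take q = (s - c) // 2

# Exhaustive check on 4 bits: for c = 0 the encoding accepts exactly
# the even-parity assignments.
for x in itertools.product((0, 1), repeat=4):
    assert parity_encoding_holds(x, 0) == (sum(x) % 2 == 0)
print("parity encoding verified on all 16 assignments")
```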
3.2 Extension to the Weighted Case
In this section, we study the more general case, where w(a, x) takes non-negative real values
instead of integers in {0, 1}. Unlike [8], we choose to build our proof on the unweighted case,
because this effectively avoids modeling the median of an array of numbers [6], which is difficult
to encode in integer programming. We note the recent work [4], which is related to but different from our
approach. Let $w : A \times X \to \mathbb{R}_+$, and $M = \max_{a,x} w(a, x)$.
Definition 3.4. We define the embedding $S_a(w, l)$ of X in $X \times \{0,1\}^l$ as:
$$S_a(w, l) = \left\{(x, y) \,\middle|\, \forall\, 1 \leq i \leq l,\; \frac{w(a, x)}{M} \leq \frac{2^{i-1}}{2^l} \Rightarrow y_i = 0\right\}. \quad (4)$$
Lemma 3.5. Let $w'_l(a, x, y)$ be an indicator variable which is 1 if and only if (x, y) is in $S_a(w, l)$,
i.e., $w'_l(a, x, y) = \mathbb{1}_{(x,y) \in S_a(w,l)}$. We claim that
$$\max_a \sum_x w(a, x) \;\leq\; \frac{M}{2^l} \max_a \sum_{(x,y)} w'_l(a, x, y) \;\leq\; 2 \max_a \sum_x w(a, x) + M 2^{n-l}. \quad (5)$$
(Footnote: if w satisfies the property that $\min_{a,x} w(a, x) \geq 2^{-l-1} M$, we do not have the $M 2^{n-l}$ term.)
Proof. Define $S_a(w, l, x_0)$ as the set of (x, y) pairs within the set $S_a(w, l)$ with $x = x_0$, i.e.,
$S_a(w, l, x_0) = \{(x, y) \in S_a(w, l) : x = x_0\}$. It is not hard to see that $\sum_{(x,y)} w'_l(a, x, y) = \sum_x |S_a(w, l, x)|$.
In the following, we first establish the relationship between $|S_a(w, l, x)|$ and $w(a, x)$. We then use the
result to show the relationship between $\sum_x |S_a(w, l, x)|$ and $\sum_x w(a, x)$.
Case (i): If $w(a, x)$ is sandwiched between two exponential levels, $\frac{M}{2^l} 2^{i-1} < w(a, x) \leq \frac{M}{2^l} 2^i$ for
$i \in \{0, 1, \ldots, l\}$, then according to Definition 3.4, for any $(x, y) \in S_a(w, l, x)$ we have
$y_{i+1} = y_{i+2} = \ldots = y_l = 0$. This makes $|S_a(w, l, x)| = 2^i$, which further implies that
$$\frac{M}{2^l} \cdot \frac{|S_a(w, l, x)|}{2} < w(a, x) \leq \frac{M}{2^l} \cdot |S_a(w, l, x)|, \quad (6)$$
or equivalently,
$$w(a, x) \leq \frac{M}{2^l} |S_a(w, l, x)| < 2 w(a, x). \quad (7)$$
Case (ii): If $w(a, x) \leq \frac{M}{2^{l+1}}$, we have $|S_a(w, l, x)| = 1$. In other words,
$$w(a, x) \leq 2 w(a, x) \leq 2 \cdot \frac{M}{2^{l+1}} |S_a(w, l, x)| = \frac{M}{2^l} |S_a(w, l, x)|. \quad (8)$$
Also, $M 2^{-l} |S_a(w, l, x)| = M 2^{-l} \leq 2 w(a, x) + M 2^{-l}$. Hence, the following bound holds in both
cases (i) and (ii):
$$w(a, x) \leq \frac{M}{2^l} |S_a(w, l, x)| \leq 2 w(a, x) + M 2^{-l}. \quad (9)$$
The lemma holds by summing over X and maximizing over A on all sides of Inequality (9).
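A quick numerical check of this sandwich bound (our own toy, with random weights for a single fixed a, and X taken to be a small list rather than $\{0,1\}^n$, so the slack term becomes $M |X| 2^{-l}$):

```python
import numpy as np

rng = np.random.default_rng(2)
num_x, l = 8, 10                 # |X| = num_x, embedding with l extra bits
w = rng.random(num_x)            # w(a, x) for one fixed a (toy data)
M = w.max()

def size_S(wx):
    """|S_a(w, l, x)| from Definition 3.4: bit y_i is free exactly when
    w(a, x)/M > 2^(i-1)/2^l, otherwise it is forced to 0."""
    free = sum(wx / M > 2.0 ** (i - 1 - l) for i in range(1, l + 1))
    return 2 ** free

lhs = w.sum()
mid = (M / 2 ** l) * sum(size_S(wx) for wx in w)
rhs = 2 * w.sum() + M * num_x * 2.0 ** (-l)  # the M*2^(n-l) term with |X| = num_x
print(lhs <= mid <= rhs)                     # True: Inequality (9) summed over X
```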
With the result of Lemma 3.5, we are ready to prove the following approximation result:
Theorem 3.6. Suppose there is an algorithm that gives a c-approximation to the unweighted
problem $\max_a \sum_{(x,y)} w'_l(a, x, y)$; then we have a 3c-approximation algorithm for the weighted
Marginal MAP problem $\max_a \sum_x w(a, x)$.
Proof. Let $l = n$ in Lemma 3.5. By definition, $M = \max_{a,x} w(a, x) \leq \max_a \sum_x w(a, x)$, so we have:
$$\max_a \sum_x w(a, x) \;\leq\; \frac{M}{2^l} \max_a \sum_{(x,y)} w'_l(a, x, y) \;\leq\; 2 \max_a \sum_x w(a, x) + M \;\leq\; 3 \max_a \sum_x w(a, x).$$
This is equivalent to:
$$\frac{1}{3} \cdot \frac{M}{2^l} \max_a \sum_{(x,y)} w'_l(a, x, y) \;\leq\; \max_a \sum_x w(a, x) \;\leq\; \frac{M}{2^l} \max_a \sum_{(x,y)} w'_l(a, x, y).$$
4 Experiments
We evaluate our proposed algorithm XOR_MMAP against two baselines: the Sample Average Approximation (SAA) [20] and the Mixed Loopy Belief Propagation (Mixed LBP) [13]. These two
baselines are selected to represent the two most widely used classes of methods that approximate the
embedded sum in MMAP problems in two different ways. SAA approximates the intractable sum
with a finite number of samples, while Mixed LBP uses a variational approximation. We obtained
the Mixed LBP implementation from the authors of [13] and we use their default parameter settings.
Since Marginal MAP problems are in general very hard and there is currently no exact solver that
scales to reasonably large instances, our main comparison is on the relative optimality gap: we first
obtain the solution $a_{method}$ for each approach. Then we compare the difference in objective function
$\log \sum_{x \in X} w(a_{method}, x) - \log \sum_{x \in X} w(a_{best}, x)$, in which $a_{best}$ is the best solution among the
three methods. Clearly, a better algorithm will find a vector a that yields a larger objective value.
The counting problem under a fixed solution a is solved using the exact counter ACE [5], which is
only used for comparing the results of different MMAP solvers.
Our first experiment is on unweighted random 2-SAT instances. Here, w(a, x) is an indicator variable
for whether the 2-SAT instance is satisfied. The SAT instances have 60 variables, 20 of which are
randomly selected to form the set A, while the remaining ones form the set X. The number of clauses varies
from 1 to 70. For a fixed number of clauses, we randomly generate 20 instances, and the left panel of
Figure 1 shows the median objective function $\sum_{x \in X} w(a_{method}, x)$ of the solutions found by the
three approaches. We tune the constants of our XOR_MMAP so that it gives a $2^{10} = 1024$-approximation
($2^{-5} \cdot sol \leq OPT \leq 2^5 \cdot sol$, $\delta = 10^{-3}$). The upper and lower bounds are shown as dashed lines.
SAA uses 10,000 samples. On average, the running time of our algorithm is reasonable.
Figure 1: (Left) In the median case, the solutions $a_0$ found by the proposed Algorithm XOR_MMAP have
a higher objective $\sum_{x \in X} w(a_0, x)$ than the solutions found by SAA and Mixed LBP, on random 2-SAT
instances with 60 variables and various numbers of clauses. Dashed lines represent the proved bounds
from XOR_MMAP. (Right) The percentage of instances on which each algorithm finds a solution with at
least 1/8 of the value of the best solution among the 3 algorithms, for different numbers of clauses.
Figure 2: In the median case, the solutions $a_0$ found by the proposed Algorithm XOR_MMAP are better
than the solutions found by SAA and Mixed LBP, on weighted 12-by-12 Ising models with mixed
coupling strengths. (Up) Field strength 0.01. (Down) Field strength 0.1. (Left) 20% of variables are
randomly selected for maximization. (Mid) 50% for maximization. (Right) 80% for maximization.
When enforcing the 1024-approximation bound, the median time for a single XOR_K procedure is in seconds,
although we occasionally have long runs (capped by a 30-minute timeout).
As we can see from the left panel of Figure 1, both Mixed LBP and SAA match the performance
of our proposed XOR_MMAP on easy instances. However, as the number of clauses increases, their
performance quickly deteriorates. In fact, for instances with more than 20 (60) clauses, typically the
a vectors returned by Mixed LBP (SAA) do not yield non-zero solution values. Therefore we are not
able to plot their performance beyond the two values. At the same time, our algorithm XOR_MMAP can
still find a vector a yielding over $2^{20}$ solutions on larger instances with more than 60 clauses, while
providing a 1024-approximation.
Next, we look at the performance of the three algorithms on weighted instances. Here, we set the
number of replicates T = 3 for our algorithm XOR_MMAP, and we repeatedly start the algorithm with
an increasing number of XOR constraints k, until it completes for all k or times out in an hour. For
SAA, we use 1,000 samples, which is the largest we can use within the memory limit. All algorithms
are given a one-hour time and a 4G memory limit.
The solutions found by XOR_MMAP are considerably better than the ones found by Mixed LBP and
SAA on weighted instances. Figure 2 shows the performance of the three algorithms on 12-by-12
Ising models with mixed coupling strength, different field strengths and number of variables to form
set A. All values in the figure are median values across 20 instances (in log10 ). In all 6 cases in
Figure 2, our algorithm XOR_MMAP is the best among the three approximate algorithms. In general,
the difference in performance increases as the coupling strength increases. These instances are
challenging for the state-of-the-art complete solvers. For example, the state-of-the-art exact solver
Figure 3: (Left) The image completion task. Solvers are given the upper halves of digits, as shown in the
first row. Solvers must complete the digits based on a two-layer deep belief network and the given upper
part. (2nd row) Completions given by XOR_MMAP. (3rd row) SAA. (4th row) Mixed Loopy Belief
Propagation. (Middle) Graphical illustration of the network cascade problem. Red circles are nodes
to purchase. Lines represent cascade probabilities. See main text. (Right) Our XOR_MMAP performs
better than SAA on a set of network cascade benchmarks, with different budgets.
AOBB with mini-bucket heuristics and moment matching [14] runs out of 4G memory on 60% of
instances with 20% variables randomly selected as max variables. We also notice that the solution
found by our XOR_MMAP is already close to the ground truth. On smaller 10-by-10 Ising models, on
which the exact AOBB solver can complete within the memory limit, the median difference between
the log10 count of the solutions found by XOR_MMAP and those found by the exact solver is 0.3, while
the differences between the solution values of XOR_MMAP against those of the Mixed BP or SAA are
on the order of 10.
We also apply the Marginal MAP solver to an image completion task. We first learn a two-layer deep
belief network [3, 10] from a 14-by-14 MNIST dataset. Then for a binary image that only contains
the upper part of a digit, we ask the solver to complete the lower part, based on the learned model.
This is a Marginal MAP task, since one needs to integrate over the states of the hidden variables, and
query the most likely states of the lower part of the image. Figure 3 shows the result of a few digits.
As we can see, SAA performs poorly. In most cases, it only manages to come up with a light dot for
all 10 different digits. Mixed Loopy Belief Propagation and our proposed XOR_MMAP perform well.
The good performance of Mixed LBP may be due to the fact that the weights on pairwise factors in
the learned deep belief network are not very combinatorial.
Finally, we consider an application that brings decision-making into machine learning models. This
network design application maximizes the spread of cascades in networks, which is important in
the domain of social networks and computational sustainability. In this application, we are given a
stochastic graph, in which the source node at time t = 0 is affected. A node v at time t will
be affected if one of its ancestor nodes at time t - 1 is affected, and the configuration of the edge
connecting the two nodes is "on". An edge connecting nodes u and v has probability $p_{u,v}$ of being turned
on. A node will not be affected if it is not purchased. Our goal is to purchase a set of nodes within a
finite budget, so as to maximize the probability that the target node is affected. We refer the reader to
[20] for more background knowledge. This application cannot be captured by graphical models due
to global constraints. Therefore, we are not able to run mixed LBP on this problem. We consider a
set of synthetic networks, and compare the performance of SAA and our XOR_MMAP with different
budgets. As we can see from the right panel of Figure 3, the nodes that our XOR_MMAP decides to
purchase result in higher probabilities of the target node being affected, compared to SAA. Each dot
in the figure is the median value over 30 networks generated in a similar way.
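For concreteness, the averaging component for a fixed purchase decision can be estimated by Monte Carlo, as in the toy sketch below (ours; the benchmark networks in the paper are generated differently):

```python
import numpy as np

def cascade_success_prob(edges, probs, source, target, purchased, trials, rng):
    """Estimate the probability that `target` is affected for a fixed set of
    purchased nodes. edges: list of (u, v) pairs; probs: edge probabilities."""
    hits = 0
    for _ in range(trials):
        on = rng.random(len(edges)) < np.asarray(probs)  # sample edge states
        affected, frontier = {source}, {source}
        while frontier:                                  # propagate the cascade
            frontier = {v for (u, v), o in zip(edges, on)
                        if o and u in affected and v in purchased
                        and v not in affected}
            affected |= frontier
        hits += target in affected
    return hits / trials

edges = [(0, 1), (1, 2), (0, 2)]
p = cascade_success_prob(edges, [0.5, 0.5, 0.2], source=0, target=2,
                         purchased={0, 1, 2}, trials=20000,
                         rng=np.random.default_rng(3))
print(p)  # close to 0.2 + 0.25 - 0.05 = 0.40
```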
5 Conclusion
We propose XOR_MMAP, a novel constant approximation algorithm to solve the Marginal MAP
problem. Our approach represents the intractable counting subproblem with queries to NP oracles,
subject to additional parity constraints. In our algorithm, the entire problem can be solved by a
single optimization. We evaluate our approach on several machine learning and decision-making
applications. We are able to show that XOR_MMAP outperforms several state-of-the-art Marginal MAP
solvers. XOR_MMAP provides a new angle to solving the Marginal MAP problem, opening the door to
new research directions and applications in real world domains.
Acknowledgments
This research was supported by National Science Foundation (Awards #0832782, 1522054, 1059284,
1649208) and Future of Life Institute (Grant 2015-143902).
References
[1] Dimitris Achlioptas and Pei Jiang. Stochastic integration via error-correcting codes. In Proc. Uncertainty
in Artificial Intelligence, 2015.
[2] Vaishak Belle, Guy Van den Broeck, and Andrea Passerini. Hashing-based approximate probabilistic
inference in hybrid domains. In Proceedings of the 31st UAI Conference, 2015.
[3] Yoshua Bengio, Pascal Lamblin, Dan Popovici, and Hugo Larochelle. Greedy layer-wise training of deep
networks. In Advances in Neural Information Processing Systems 19, 2006.
[4] Supratik Chakraborty, Dror Fried, Kuldeep S. Meel, and Moshe Y. Vardi. From weighted to unweighted
model counting. In Proceedings of the 24th Interational Joint Conference on AI (IJCAI), 2015.
[5] Mark Chavira, Adnan Darwiche, and Manfred Jaeger. Compiling relational bayesian networks for exact
inference. Int. J. Approx. Reasoning, 2006.
[6] Stefano Ermon, Carla P. Gomes, Ashish Sabharwal, and Bart Selman. Embed and project: Discrete
sampling with universal hashing. In Advances in Neural Information Processing Systems (NIPS), pages
2085-2093, 2013.
[7] Stefano Ermon, Carla P. Gomes, Ashish Sabharwal, and Bart Selman. Optimization with parity constraints:
From binary codes to discrete integration. In Proceedings of the Twenty-Ninth Conference on Uncertainty
in Artificial Intelligence, UAI, 2013.
[8] Stefano Ermon, Carla P. Gomes, Ashish Sabharwal, and Bart Selman. Taming the curse of dimensionality:
Discrete integration by hashing and optimization. In Proceedings of the 30th International Conference on
Machine Learning, ICML, 2013.
[9] Stefano Ermon, Carla P. Gomes, Ashish Sabharwal, and Bart Selman. Low-density parity constraints
for hashing-based discrete integration. In Proceedings of the 31th International Conference on Machine
Learning, ICML, 2014.
[10] Geoffrey Hinton and Ruslan Salakhutdinov. Reducing the dimensionality of data with neural networks.
Science, 313(5786):504 ? 507, 2006.
[11] Jiarong Jiang, Piyush Rai, and Hal Daumé III. Message-passing for approximate MAP inference with
latent variables. In Advances in Neural Information Processing Systems 24, 2011.
[12] Junkyu Lee, Radu Marinescu, Rina Dechter, and Alexander T. Ihler. From exact to anytime solutions for
marginal MAP. In Proceedings of the Thirtieth AAAI Conference on Artificial Intelligence, AAAI, 2016.
[13] Qiang Liu and Alexander T. Ihler. Variational algorithms for marginal MAP. Journal of Machine Learning
Research, 14, 2013.
[14] Radu Marinescu, Rina Dechter, and Alexander Ihler. Pushing forward marginal map with best-first search.
In Proceedings of the 24th International Conference on Artificial Intelligence (IJCAI), 2015.
[15] Radu Marinescu, Rina Dechter, and Alexander T. Ihler. AND/OR search for marginal MAP. In Proceedings
of the Thirtieth Conference on Uncertainty in Artificial Intelligence, UAI, 2014.
[16] Denis Deratani Mauá and Cassio Polpo de Campos. Anytime marginal MAP inference. In Proceedings of
the 29th International Conference on Machine Learning, ICML, 2012.
[17] James D. Park and Adnan Darwiche. Solving map exactly using systematic search. In Proceedings of the
Nineteenth Conference on Uncertainty in Artificial Intelligence (UAI), 2003.
[18] James D. Park and Adnan Darwiche. Complexity results and approximation strategies for map explanations.
J. Artif. Int. Res., 2004.
[19] Wei Ping, Qiang Liu, and Alexander T. Ihler. Decomposition bounds for marginal MAP. In Advances in
Neural Information Processing Systems 28, 2015.
[20] Daniel Sheldon, Bistra N. Dilkina, Adam N. Elmachtoub, Ryan Finseth, Ashish Sabharwal, Jon Conrad,
Carla P. Gomes, David B. Shmoys, William Allen, Ole Amundsen, and William Vaughan. Maximizing the
spread of cascades using network design. In UAI, 2010.
[21] Shan Xue, Alan Fern, and Daniel Sheldon. Scheduling conservation designs for maximum flexibility via
network cascade optimization. J. Artif. Intell. Res. (JAIR), 2015.
[22] Yexiang Xue, Ian Davies, Daniel Fink, Christopher Wood, and Carla P. Gomes. Avicaching: A two
stage game for bias reduction in citizen science. In Proceedings of the 15th International Conference on
Autonomous Agents and Multiagent Systems (AAMAS), 2016.
| 6462 | [vw_text: bag-of-words feature vector omitted] |
6,039 | 6,463 | PerforatedCNNs: Acceleration through Elimination
of Redundant Convolutions
Michael Figurnov^{1,2}, Aijan Ibraimova^4, Dmitry Vetrov^{1,3}, and Pushmeet Kohli^5
^1 National Research University Higher School of Economics  ^2 Lomonosov Moscow State University
^3 Yandex  ^4 Skolkovo Institute of Science and Technology  ^5 Microsoft Research
michael@figurnov.ru, aijan.ibraimova@gmail.com, vetrovd@yandex.ru,
pkohli@microsoft.com
Abstract
We propose a novel approach to reduce the computational cost of evaluation of
convolutional neural networks, a factor that has hindered their deployment in low-power devices such as mobile phones. Inspired by the loop perforation technique
from source code optimization, we speed up the bottleneck convolutional layers by
skipping their evaluation in some of the spatial positions. We propose and analyze
several strategies of choosing these positions. We demonstrate that perforation
can accelerate modern convolutional networks such as AlexNet and VGG-16 by a
factor of 2x - 4x. Additionally, we show that perforation is complementary to the
recently proposed acceleration method of Zhang et al. [28].
1 Introduction
The last few years have seen convolutional neural networks (CNNs) emerge as an indispensable tool
for computer vision. However, modern CNNs have a high computational cost of evaluation, with
convolutional layers usually taking up over 80% of the time. For instance, the VGG-16 network [25] for
the problem of object recognition requires $1.5 \times 10^{10}$ floating-point multiplications per image. These
computational requirements hinder the deployment of such networks on systems without GPUs and
in scenarios where power consumption is a major concern, such as mobile devices.
The problem of trading accuracy of computations for speed is well-known within the software
engineering community. One of the most prominent methods for this problem is loop perforation [18,
19, 24]. In a nutshell, this technique isolates loops in the code that are not critical for the execution, and
then reduces their computational cost by skipping some iterations. More recently, researchers have
considered problem-dependent perforation strategies that exploit the structure of the problem [23].
Inspired by the general principle of perforation, we propose to reduce the computational cost of CNN
evaluation by exploiting the spatial redundancy of the network. Modern CNNs, such as AlexNet,
exploit this redundancy through the use of strides in the convolutional layers. However, using the
convolutional strides changes the architecture of the network (intermediate representations size and
the number of weights in the first fully-connected layer), which might be undesirable. Instead of
using strides, we argue for the use of interpolation (perforation) of responses in the convolutional
layer. A key element of this approach is the choice of the perforation mask, which defines the output
positions to evaluate exactly. We propose several approaches to select the perforation masks and a
method of choosing a combination of perforation masks for different layers. To restore the network
accuracy, we perform fine-tuning of the perforated network. Our experiments show that this method
can reduce the evaluation time of modern CNN architectures proposed in the literature by a factor of
2x - 4x with a small decrease in accuracy.
2 Related Work
Reducing the computational cost of CNN evaluation is an active area of research, with both highly
optimized implementations and approximate methods investigated.
30th Conference on Neural Information Processing Systems (NIPS 2016), Barcelona, Spain.
Figure 1: Reduction of convolutional layer evaluation to matrix multiplication (input tensor U, data matrix M built by im2row, kernel K, output tensor V). Our idea is to leave
only a subset of rows (defined by a perforation mask) in the data matrix M and to interpolate the
missing output values.
Implementations that exploit the parallelism available in computational architectures like GPUs
(cuda-convnet2 [13], CuDNN [3]) have made it possible to significantly reduce the evaluation time of CNNs.
Since CuDNN internally reduces the computation of convolutional layers to matrix-by-matrix
multiplication (without explicitly materializing the data matrix), our approach can potentially be
incorporated into this library. In a similar vein, the use of FPGAs [22] leads to better trade-offs between speed and power consumption. Several papers [5, 9] showed that CNNs may be
efficiently evaluated using low-precision arithmetic, which is important for FPGA implementations.
Most approximate methods of decreasing the CNN computational cost exploit the redundancies of
the convolutional kernel using low-rank tensor decompositions [6, 10, 16, 28]. In most cases, a
convolutional layer is replaced by several convolutional layers applied sequentially, which have a
much lower total computational cost. We show that the combination of perforation with the method
of Zhang et al. [28] improves upon both approaches.
For spatially sparse inputs, it is possible to exploit this sparsity to speed up evaluation and training [8].
While this approach is similar to ours in the spirit, we do not rely on spatially sparse inputs. Instead,
we sparsely sample the outputs of a convolutional layer and interpolate the remaining values.
In a recent work, Lebedev and Lempitsky [15] also decrease the CNN computational cost by reducing
the size of the data matrix. The difference is that their approach reduces the convolutional kernel's
support, while our approach decreases the number of spatial positions in which the convolutions are
evaluated. The two methods are complementary.
Several papers have demonstrated that it is possible to compress the parameters of the fully-connected
layers (where most CNN parameters reside) with a marginal error increase [4, 21, 27]. Since our
method does not directly modify the fully-connected layers, it is possible to combine these methods
with our approach and obtain a fast and small CNN.
3 PerforatedCNNs
This section provides a detailed description of our approach. Before proceeding further, we introduce
the notation that will be used in the rest of the paper.
Notation. A convolutional layer takes as input a tensor U of size $X \times Y \times S$ and outputs a tensor
V of size $X' \times Y' \times T$, where $X' = X - d + 1$, $Y' = Y - d + 1$. The first two dimensions are spatial
(height and width), and the third dimension is the number of channels (for example, for an RGB input
image S = 3). The set of T convolution kernels K is given by a tensor of size $d \times d \times S \times T$. For
simplicity of notation, we assume unit stride and no zero-padding, and skip the biases. The convolutional
layer output may be defined as follows:
$$V(x, y, t) = \sum_{i=1}^{d} \sum_{j=1}^{d} \sum_{s=1}^{S} K(i, j, s, t)\, U(x + i - 1,\, y + j - 1,\, s) \quad (1)$$
Additionally, we define the set of all spatial indices (positions) of the output, $\Omega = \{1, \ldots, X'\} \times \{1, \ldots, Y'\}$. A perforation mask $I \subseteq \Omega$ is the set of indices in which the outputs are calculated exactly.
Denote by $N = |I|$ the number of positions to be calculated exactly, and by $r = 1 - \frac{N}{|\Omega|}$ the perforation
rate.
Reduction to matrix multiplication. To achieve high computational performance, many deep learning frameworks, including Caffe [12] and MatConvNet [26], reduce the computation of convolutional
layers to the heavily-optimized matrix-by-matrix multiplication routine of basic linear algebra packages. This process, sometimes referred to as lowering, is illustrated in fig. 1. First, a data matrix M
of size $X'Y' \times d^2 S$ is constructed using the im2row function. The rows of M are the elements of patches
of the input tensor U of size $d \times d \times S$. Then, M is multiplied by the kernel tensor K reshaped to size
$d^2 S \times T$. The resulting matrix of size $X'Y' \times T$ is the output tensor V, up to a reshape. For a more
detailed exposition, see [26].
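A minimal numpy sketch of the lowering procedure (our own illustration; production frameworks build M far more efficiently than this patch-by-patch loop):

```python
import numpy as np

def conv_via_lowering(U, K):
    """Convolution by lowering (fig. 1): build the data matrix M with im2row,
    then perform one matrix multiply. U: X x Y x S input, K: d x d x S x T kernels."""
    X, Y, S = U.shape
    d, _, _, T = K.shape
    Xo, Yo = X - d + 1, Y - d + 1
    M = np.empty((Xo * Yo, d * d * S))
    for x in range(Xo):
        for y in range(Yo):
            M[x * Yo + y] = U[x:x + d, y:y + d, :].ravel()  # one d*d*S patch per row
    V = M @ K.reshape(d * d * S, T)                         # (Xo*Yo) x T
    return V.reshape(Xo, Yo, T)

# Sanity check against the direct definition (1) on a tiny input.
rng = np.random.default_rng(0)
U = rng.standard_normal((6, 6, 2)); K = rng.standard_normal((3, 3, 2, 4))
V = conv_via_lowering(U, K)
x, y, t = 2, 1, 3
direct = sum(K[i, j, s, t] * U[x + i, y + j, s]
             for i in range(3) for j in range(3) for s in range(2))
assert np.isclose(V[x, y, t], direct)
```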
3.1 Perforated convolutional layer
In this section we present the perforated convolutional layer. In a small fraction of spatial positions,
the outputs of the proposed layer are equal to the outputs of a usual convolutional layer. The
remaining values are interpolated using the nearest neighbor from this set of positions. We evaluate
other interpolation strategies in appendix A.
The perforated convolutional layer is a generalization of the standard convolutional layer. When
the perforation mask is equal to all the output spatial positions, the perforated convolutional layer's
output equals the conventional convolutional layer's output.
Formally, let $I \subseteq \Omega$ be the perforation mask of spatial outputs to be calculated exactly (the constraint
that the masks are shared for all channels of the output is required for the reduction to matrix
multiplication). The function $\ell(x, y) : \Omega \to I$ returns the index of the nearest neighbor in I according
to Euclidean distance (with ties broken randomly):
$$\ell(x, y) = (\ell_1(x, y), \ell_2(x, y)) = \arg\min_{(x', y') \in I} \sqrt{(x - x')^2 + (y - y')^2}. \quad (2)$$
Note that the function $\ell(x, y)$ may be calculated in advance and cached.
The perforated convolutional layer output $\hat{V}$ is defined as follows:
$$\hat{V}(x, y, t) = V(\ell_1(x, y), \ell_2(x, y), t), \quad (3)$$
where $V(x, y, t)$ is the output of the usual convolutional layer, defined by (1). Since $\ell(x, y) = (x, y)$
for $(x, y) \in I$, the outputs in the spatial positions I are calculated exactly. The values in other positions
are interpolated using the value of the nearest neighbor. To evaluate a perforated convolutional layer,
we only need to calculate the values $V(x, y, t)$ for $(x, y) \in I$, which can be done efficiently by
reduction to matrix multiplication. In this case, the data matrix M contains just $N = |I|$ rows, instead
of the original $X'Y' = |\Omega|$ rows. Perforation is not limited to this implementation of a convolutional
layer, and can be combined with other implementations that support strided convolutions, such as the
direct convolution approach of cuda-convnet2 [13].
In our implementation, we only store the output values $V(x, y, t)$ for $(x, y) \in I$. The interpolation
is performed implicitly by masking the reads of the following pooling or convolutional layer. For
example, when accelerating the conv3 layer of AlexNet, the interpolation cost is transferred to the conv4
layer. We observe no slowdown of the conv4 layer when using a GPU, and a 0-3% slowdown when
using a CPU. This design choice has several advantages. Firstly, the memory size required to store
the activations is reduced by a factor of $\frac{1}{1-r}$. Secondly, the following non-linearity layers and $1 \times 1$
convolutional layers are also sped up, since they are applied to a smaller number of elements.
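The following numpy sketch (ours) makes the perforated layer explicit: it evaluates the convolution only at the masked positions and fills the rest using the precomputed nearest-neighbor map, whereas the actual implementation described above performs the interpolation implicitly by masking reads:

```python
import numpy as np

def perforated_conv(U, K, mask):
    """Sketch of the perforated layer (eq. 3): evaluate eq. (1) only at the
    positions in `mask`, then nearest-neighbor interpolate the rest."""
    X, Y, S = U.shape
    d, _, _, T = K.shape
    Xo, Yo = X - d + 1, Y - d + 1
    I = np.argwhere(mask)                      # N x 2 array of exact positions
    # Data matrix with only N rows instead of Xo*Yo.
    M = np.stack([U[x:x + d, y:y + d, :].ravel() for x, y in I])
    exact = M @ K.reshape(d * d * S, T)        # N x T
    # l(x, y): index of the nearest exact position (ties broken arbitrarily).
    grid = np.argwhere(np.ones((Xo, Yo), dtype=bool))
    dists = ((grid[:, None, :] - I[None, :, :]) ** 2).sum(-1)
    nearest = dists.argmin(axis=1)
    return exact[nearest].reshape(Xo, Yo, T)

rng = np.random.default_rng(1)
U = rng.standard_normal((8, 8, 3)); K = rng.standard_normal((3, 3, 3, 5))
mask = rng.random((6, 6)) < 0.5                # ~50% perforation rate
mask[0, 0] = True                              # ensure at least one exact position
V_hat = perforated_conv(U, K, mask)
print(V_hat.shape)                             # (6, 6, 5)
```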
3.2 Perforation masks
We propose several ways of generating the perforation masks, i.e., of choosing N points from $\Omega$. We
visualize the perforation masks I as binary matrices with black squares in the positions of the set I.
We only consider perforation masks that are independent of the input object, and leave the exploration
of input-dependent perforation masks to future work.
Uniform perforation mask is just N points chosen randomly without replacement from the set ?.
However, as can be seen from fig. 2a, for N |?|, the points tend to cluster. This is undesirable
because a more scattered set I would reduce the average distance to the set I.
Grid perforation mask is a set of points $I = \{a(1), \ldots, a(K_x)\} \times \{b(1), \ldots, b(K_y)\}$, see fig. 2b.
We choose the values of a(i), b(i) using the pseudorandom integer sequence generation scheme of
[7].
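The sketch below shows one plausible way to generate these two masks. The exact pseudorandom scheme of [7] is not reproduced here, so the evenly spaced grid is a simplification of ours.

```python
import numpy as np

def uniform_mask(shape, n, rng=np.random.default_rng(0)):
    """N output positions drawn without replacement from Omega."""
    mask = np.zeros(shape, dtype=bool)
    idx = rng.choice(mask.size, size=n, replace=False)
    mask.flat[idx] = True
    return mask

def grid_mask(shape, kx, ky):
    """Cartesian product of Kx row indices and Ky column indices,
    spread evenly over the output (a stand-in for the scheme of [7])."""
    a = np.floor(np.linspace(0, shape[0] - 1, kx)).astype(int)
    b = np.floor(np.linspace(0, shape[1] - 1, ky)).astype(int)
    mask = np.zeros(shape, dtype=bool)
    mask[np.ix_(a, b)] = True
    return mask
```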
Pooling structure mask exploits the structure of the overlaps of pooling operators. Denote by A(x, y)
the number of times an output of the convolutional layer is used in the pooling operators. The grid-like
pattern in fig. 2d is caused by a pooling of size 3 × 3 with stride 2 (such parameters are used e.g.
in Network in Network and AlexNet). The pooling structure mask is obtained by picking the top-N
positions with the highest values of A(x, y), with ties broken randomly, see fig. 2c.
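A minimal sketch of this mask follows, assuming valid padding and the 3 × 3 / stride-2 pooling mentioned above; the added noise is one possible realization of "ties broken randomly".

```python
import numpy as np

def pooling_usage_counts(x, y, pool=3, stride=2):
    """A(x, y): how many pooling windows read each conv output position."""
    a = np.zeros((x, y), dtype=int)
    for i in range(0, x - pool + 1, stride):
        for j in range(0, y - pool + 1, stride):
            a[i:i + pool, j:j + pool] += 1
    return a

def pooling_structure_mask(x, y, n, pool=3, stride=2,
                           rng=np.random.default_rng(0)):
    """Top-N positions by A(x, y), with random tie-breaking."""
    a = pooling_usage_counts(x, y, pool, stride).ravel().astype(float)
    a += rng.uniform(0, 0.5, size=a.size)   # break ties between equal counts
    mask = np.zeros(x * y, dtype=bool)
    mask[np.argsort(-a)[:n]] = True
    return mask.reshape(x, y)
```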
Figure 2: Perforation masks, AlexNet conv2, r = 80.25%. Panels: (a) Uniform, (b) Grid, (c) Pooling structure, (d) Weights A(x, y). Best viewed in color.
Figure 3: Top: ImageNet images and the corresponding values of the impact G(x, y; V) for AlexNet conv2. Bottom: average impacts B(x, y) for the original network and for the perforated network, and the impact perforation mask for r = 90%. Best viewed in color.
Impact mask estimates the impact of perforation of each position on the CNN loss function, and
then removes the least important positions. Denote by L(V) the loss function of the CNN (such as
negative log-likelihood) as a function of the considered convolutional layer outputs V. Next, suppose
V′ is obtained from V by replacing one element $(x_0, y_0, t_0)$ with a neutral value of zero. We estimate
the impact of a position as a first-order Taylor approximation of the magnitude of change of L(V):
$$|L(V') - L(V)| \approx \Big| \sum_{x=1}^{X'} \sum_{y=1}^{Y'} \sum_{t=1}^{T} \frac{\partial L(V)}{\partial V(x, y, t)} \big(V'(x, y, t) - V(x, y, t)\big) \Big| = \Big| \frac{\partial L(V)}{\partial V(x_0, y_0, t_0)}\, V(x_0, y_0, t_0) \Big|. \quad (4)$$
The value $\frac{\partial L(V)}{\partial V(x_0, y_0, t_0)}$ may be obtained using backpropagation. In the case of a perforated convolutional layer, we calculate the derivatives with respect to the convolutional layer output V (not the
interpolated output $\hat{V}$). This makes the impact of the previously perforated positions zero and sums
the impact of the non-perforated positions over all the outputs which share the value.
Since we are interested in the total impact of a spatial position $(x, y) \in \Omega$, we take a sum over all the
channels and average this estimate of the impacts over the training dataset:
$$G(x, y; V) = \sum_{t=1}^{T} \Big| \frac{\partial L(V)}{\partial V(x, y, t)}\, V(x, y, t) \Big| \quad (5)$$
$$B(x, y) = \mathbb{E}_{V \sim \text{training set}}\, G(x, y; V) \quad (6)$$
Finally, the impact mask is formed by taking the top-N positions with the highest values of B(x, y).
Examples of the values of G(x, y; V), B(x, y), and the impact mask are shown in fig. 3. Note that the
regions of high G(x, y; V) values usually contain the most salient features of the image. The
averaged weights B(x, y) tend to be higher in the center, since ImageNet images usually contain a
centered object. Additionally, the grid-like structure of the pooling structure mask is automatically inferred.
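The statistics of eqs. (5)-(6) can be gathered with one backward pass per batch. The sketch below uses PyTorch-style autograd purely for illustration (the paper's implementations are forks of MatConvNet and Caffe), and the hook-based capture of V is an assumption of ours.

```python
import torch

def impact_scores(model, loss_fn, loader, layer):
    """Estimate B(x, y): the average over the data of
    sum_t |dL/dV(x, y, t) * V(x, y, t)| for one convolutional layer.

    `loader` is assumed to yield (images, labels) batches; `layer` is the
    convolutional module whose output V we probe via a forward hook.
    """
    acts = {}
    layer.register_forward_hook(lambda m, i, o: acts.update(v=o))
    total, count = None, 0
    for images, labels in loader:
        loss = loss_fn(model(images), labels)
        v = acts["v"]                            # (batch, T, X', Y')
        grad = torch.autograd.grad(loss, v)[0]   # dL/dV by backpropagation
        g = (grad * v).abs().sum(dim=1)          # eq. (5): sum over channels
        total = g.sum(dim=0) if total is None else total + g.sum(dim=0)
        count += images.size(0)
    return total / count                         # B(x, y), eq. (6)
```

Sorting the resulting map and keeping the top-N positions (with ties broken randomly, as for the pooling structure mask) then yields the impact mask.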
Network | Dataset  | Error       | CPU time | GPU time | Mem.   | Mult.       | # conv
NIN     | CIFAR-10 | top-1 10.4% | 4.6 ms   | 0.8 ms   | 5.1 MB | 2.2 × 10^8  | 3
AlexNet | ImageNet | top-5 19.6% | 16.7 ms  | 2.0 ms   | 6.6 MB | 0.5 × 10^9  | 5
VGG-16  | ImageNet | top-5 10.1% | 300 ms   | 29 ms    | 110 MB | 1.5 × 10^10 | 13
Table 1: Details of the CNNs used for the experimental evaluation. Timings, memory consumption
and the number of multiplications are normalized by the batch size. Memory consumption is the memory
required to store the activations (intermediate results) of the network during the forward pass.
Figure 4: Acceleration of a single layer of AlexNet for different mask types without fine-tuning. Panels: (a) conv2, CPU; (b) conv2, GPU; (c) conv3, CPU; (d) conv3, GPU. Each panel plots the top-5 error increase (%) against the CPU or GPU speedup (times) for the Uniform, Grid, Pooling structure (conv2 only), and Impact masks. Values are averaged over 5 runs.
Since perforation of a layer changes the impacts of all the layers, in the experiments we iterate
between increasing the perforation rate of a layer and recalculation of impacts. We find that this
improves results by co-adapting the perforation masks of different convolutional layers.
3.3
Choosing the perforation configurations
For whole network acceleration, it is important to find a combination of per-layer perforation rates that
would achieve high speedup with low error increase. To do this, we employ a simple greedy strategy.
We use a single perforation mask type and a fixed range of increasing perforation rates. Denote by t
the evaluation time of the accelerated network and by e the objective (we use negative log-likelihood
for a subset of training images). Let t0 and e0 be the respective values for the non-accelerated network.
At each iteration, we try to increase the perforation rate for each layer and choose the layer for which
this results in the minimal value of the cost function $\frac{e - e_0}{t_0 - t}$.
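A sketch of this greedy procedure follows. Here `evaluate` is a hypothetical user-supplied routine that measures the evaluation time and the objective of a candidate configuration; in practice the loop would be stopped once a target speedup is reached.

```python
def choose_perforation_rates(layers, rates, evaluate):
    """Greedy search over per-layer perforation rates (section 3.3 sketch).

    layers: layer names; rates: sorted increasing perforation rates;
    evaluate(config) -> (time, objective) for the network under `config`,
    where config maps each layer to a rate or None (not perforated).
    """
    config = {name: None for name in layers}
    t0, e0 = evaluate(config)                     # non-accelerated network
    while True:
        best, best_cost = None, float("inf")
        for name in layers:
            cur = config[name]
            nxt = rates[0] if cur is None else next(
                (r for r in rates if r > cur), None)
            if nxt is None:                       # rate already maximal
                continue
            t, e = evaluate({**config, name: nxt})
            if t < t0:                            # must actually save time
                cost = (e - e0) / (t0 - t)        # error increase per second saved
                if cost < best_cost:
                    best, best_cost = (name, nxt), cost
        if best is None:
            return config
        config[best[0]] = best[1]
```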
4
Experiments
We use three convolutional neural networks of increasing size and computational complexity: Network in Network [17], AlexNet [14] and VGG-16 [25], see table 1. In all networks, we attempt
to perforate all the convolutional layers, except for the 1 × 1 convolutional layers of NIN. We
perform timings on a computer with a quad-core Intel Core i5-4460 CPU, 16 GB RAM and a
nVidia Geforce GTX 980 GPU. The batch size used for timings is 128 for NIN, 256 for AlexNet
and 16 for VGG-16. The networks are obtained from Caffe Model Zoo. For AlexNet, the Caffe
reimplementation is used which is slightly different from the original architecture (pooling and
normalization layers are swapped). We use a fork of MatConvNet framework for all experiments, except for fine-tuning of AlexNet and VGG-16, for which we use a fork of Caffe. The
source code is available at https://github.com/mfigurnov/perforated-cnn-matconvnet,
https://github.com/mfigurnov/perforated-cnn-caffe.
We begin our experiments by comparing the proposed perforation masks in a common benchmark
setting: acceleration of a single AlexNet layer. Then, we compare whole-network acceleration
with the best-performing masks to baselines such as decreasing the input image size and increasing
the strides. We proceed to show that perforation scales to large networks by presenting whole-network acceleration results for AlexNet and VGG-16. Finally, we demonstrate that perforation is
complementary to the recently proposed acceleration method of Zhang et al. [28].
Method                         | CPU time ↓ | Error ↑ (%)
Impact, r = 3/4, 3 × 3 filters | 9.1×       | +1
Impact, r = 5/6                | 5.3×       | +1.4
Impact, r = 4/5                | 4.2×       | +0.9
Lebedev and Lempitsky [15]     | 20×        | top-1 +1.1
Lebedev and Lempitsky [15]     | 9×         | top-1 +0.3
Jaderberg et al. [10]          | 6.6×       | +1
Lebedev et al. [16]            | 4.5×       | +1
Denton et al. [6]              | 2.7×       | +1
Table 2: Acceleration of AlexNet's conv2. Top: our results after fine-tuning; bottom: previously
published results. The result of [10] is provided by [16]. The experiment with reduced spatial size of the
kernel (3 × 3 instead of 5 × 5) suggests that perforation is complementary to the "brain damage"
method of [15], which also reduces the spatial support of the kernel.
4.1
Single layer results
We explore the speedup-error trade-off of the proposed perforation masks on the two bottleneck
convolutional layers of AlexNet, conv2 and conv3, see fig. 4. The pooling structure perforation
mask is only applicable to the conv2 because it is directly followed by a max-pooling, whereas the
conv3 is followed by another convolutional layer. We see that impact perforation mask works best
for the conv2 layer while grid mask performs very well for conv3. The standard deviation of results
is small for all the perforation masks, except the uniform mask for high speedups (where the grid
mask outperforms it). The results are similar for both CPU and GPU, showing the applicability of
our method for both platforms. Note that if we consider the best perforation mask for each speedup
value, then we see that the conv2 layer is easier to accelerate than the conv3 layer. We observe this
pattern in other experiments: layers immediately followed by a max-pooling are easier to accelerate
than the layers followed by a convolutional layer. Additional results for NIN network are presented
in appendix B.
We compare our results after fine-tuning to the previously published results on the acceleration
of AlexNet's conv2 in table 2. Motivated by the results of [15] showing that the spatial support of the conv2
convolutional kernel may be reduced with a small error increase, we reduce the kernel's spatial size
from 5 × 5 to 3 × 3 and apply the impact perforation mask. This leads to a 9.1× acceleration for
a 1% top-5 error increase. Using the more sophisticated method of [15] to reduce the spatial support
may lead to further improvements.
4.2
Baselines
We compare PerforatedCNNs with the baseline methods of decreasing the computational cost of
CNNs by exploiting the spatial redundancy. Unlike perforation, these methods decrease the size of
the activations (intermediate outputs) of the CNN. For a network with fully-connected (FC) layers,
this would change the number of CNN parameters in the first FC layer, effectively modifying the
architecture. To avoid this, we use CIFAR-10 NIN network, which replaces FC layers with global
average pooling (mean-pooling over all spatial positions in the last layer).
We consider the following baseline methods. Resize. The input image is downscaled with the aspect
ratio preserved. Stride. The strides of the convolutional layers are increased, making the activations
spatially smaller. Fractional stride. Motivated by fractional max-pooling [7], we introduce a more
flexible modification of strides which evaluates convolutions on a non-regular grid (with a varying
step size), providing a more fine-grained control over the activations size and speedup. We use grid
perforation mask generation scheme to choose the output positions to evaluate.
We compare these strategies to perforation of all the layers with the two types of masks that
performed best in the previous section: grid and impact. Note that "grid" is, in fact, equivalent to
fractional strides, but with the missing values being interpolated.
All the methods, except resize, require a parameter value per convolutional layer, leading to a
large number of possible configurations. We use the original network to explore this space of
configurations. For impact, we use the greedy algorithm. For stride, we evaluate all possible
combinations of parameters. For grid and fractional strides, for each layer we consider the set of rates
$\frac{1}{3}, \frac{1}{2}, \ldots, \frac{8}{9}, \frac{9}{10}$ (for fractional strides this is the fraction of convolutions calculated), and evaluate all
combinations of such rates. Then, for each method, we build a Pareto-optimal front of parameters
which produced the smallest error increase for a given CPU speedup.
Figure 5: Comparison of whole-network perforation (grid and impact masks) with baseline strategies
(resizing the input images, increasing the strides of convolutional layers, fractional strides) for acceleration of the CIFAR-10
NIN network. Panels plot the top-1 error (%) against the CPU speedup (times) for (a) the original network and (b) after retraining.
Finally, we train the network
weights "from scratch" (starting from a random initialization) for the Pareto-optimal configurations
with accelerations close to 2×, 3× and 4×. For fractional strides, we use fine-tuning, since it performs
significantly better than training from scratch.
The results are displayed in fig. 5. Impact perforation is the best strategy both for the original
network and after training the network from scratch. Grid perforation is slightly worse. Convolutional
strides are used in many CNNs, such as AlexNet, to decrease the computational cost of training and
evaluation. Our results show that if changing the size of the intermediate representations and training the
network from scratch is an option, then it is indeed a good strategy. Although more general, fractional
strides perform poorly compared to strides, most likely because they "downsample" the outputs of a
convolutional layer non-uniformly, making them hard to process for the next convolutional layer.
4.3
Whole network results
We evaluate the effect of perforation of all the convolutional layers of three CNN models. To tune the
perforation rates, we employ the greedy method described in section 3.3. We use twenty perforation
rates: $\frac{1}{3}, \frac{1}{2}, \frac{2}{3}, \ldots, \frac{18}{19}, \frac{19}{20}$. For NIN and AlexNet we use the impact perforation mask. For VGG-16
we use the grid perforation mask as we find that it considerably simplifies fine-tuning. Using more
than one type of perforation masks does not improve the results. Obtaining the perforation rates
configuration takes about one day for the largest network we considered, VGG-16. In order to
decrease the error of the accelerated network, we tune the network's weights. We do not observe any
problems with backpropagation, such as exploding/vanishing gradients. The results are presented
in table 3. Perforation damages the network performance significantly, but network weights tuning
restores most of the accuracy. All the considered networks may be accelerated by a factor of two
on both CPU and GPU, with under 2.6% increase of error. Theoretical speedups (reduction of the
number of multiplications) are usually close to the empirical ones. Additionally, the memory required
to store network activations is significantly reduced by storing only the non-perforated output values.
4.4
Combining acceleration methods
A promising way to achieve high speedups with low error increase is to combine multiple acceleration
methods. For this to succeed, the methods should exploit different types of redundancy in the network.
In this section, we verify that perforation can be combined with the inter-channel redundancy
elimination approach of [28] to achieve improved speedup-error ratios.
We reimplement the linear asymmetric method of [28]. It decomposes a convolutional layer with a
(d × d × S × T) kernel (height × width × input channels × output channels) into a sequence of two layers,
(d × d × S × T′) followed by (1 × 1 × T′ × T), with T′ < T. The second layer is typically very fast, so the
overall speedup is roughly T/T′. When decomposing a perforated convolutional layer, we transfer the
perforation mask to the first obtained layer.
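For illustration, a plain truncated SVD of the reshaped kernel already produces this two-layer structure; note that the actual method of [28] additionally corrects for the response statistics and accumulated error, which this sketch omits.

```python
import numpy as np

def decompose_kernel(k, t_prime):
    """Factor a (d, d, S, T) kernel into (d, d, S, T') and (1, 1, T', T)
    layers via truncated SVD -- a simple stand-in for the linear
    asymmetric method of [28].
    """
    d, _, s, t = k.shape
    u, sig, vt = np.linalg.svd(k.reshape(d * d * s, t), full_matrices=False)
    first = (u[:, :t_prime] * sig[:t_prime]).reshape(d, d, s, t_prime)
    second = vt[:t_prime].reshape(1, 1, t_prime, t)
    return first, second
```

Convolving with `first` and then applying the cheap 1 × 1 convolution `second` approximates the original layer, and any perforation mask is simply attached to the first factor.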
We first apply perforation to the network and fine-tune it, as in the previous section. Then, we apply
the inter-channel redundancy elimination method to this network. Finally, we perform the second
round of fine-tuning with a much lower learning rate of 1e-9, due to exploding gradients. All the
methods are tested at the theoretical speedup level of 4?. When the two methods are combined, the
acceleration rate for each method is taken to be roughly equal. The results are presented in the table
7
Network | Device | Speedup | Mult. ↓ | Mem. ↓ | Error ↑ (%) | Tuned error ↑ (%)
NIN     | CPU    | 2.2×    | 2.5×    | 2.0×   | +1.5        | +0.4
NIN     | CPU    | 3.1×    | 4.4×    | 3.5×   | +5.5        | +1.9
NIN     | CPU    | 4.2×    | 6.6×    | 4.4×   | +8.3        | +2.9
NIN     | GPU    | 2.1×    | 3.6×    | 3.3×   | +4.5        | +1.6
NIN     | GPU    | 3.0×    | 10.1×   | 5.7×   | +18.2       | +5.6
NIN     | GPU    | 3.5×    | 19.1×   | 9.2×   | +37.4       | +12.4
AlexNet | CPU    | 2.0×    | 2.1×    | 1.8×   | +10.7       | +2.3
AlexNet | CPU    | 3.0×    | 3.5×    | 2.6×   | +28.0       | +6.1
AlexNet | CPU    | 3.6×    | 4.4×    | 2.9×   | +60.7       | +9.9
AlexNet | GPU    | 2.0×    | 2.0×    | 1.7×   | +8.5        | +2.0
AlexNet | GPU    | 3.0×    | 2.6×    | 2.0×   | +16.4       | +3.2
AlexNet | GPU    | 4.1×    | 3.4×    | 2.4×   | +28.1       | +6.2
VGG-16  | CPU    | 2.0×    | 1.8×    | 1.5×   | +15.6       | +1.1
VGG-16  | CPU    | 3.0×    | 2.9×    | 1.8×   | +54.3       | +3.7
VGG-16  | CPU    | 4.0×    | 4.0×    | 2.5×   | +71.6       | +5.5
VGG-16  | GPU    | 2.0×    | 1.9×    | 1.7×   | +23.1       | +2.5
VGG-16  | GPU    | 3.0×    | 2.8×    | 2.4×   | +65.0       | +6.8
VGG-16  | GPU    | 4.0×    | 4.7×    | 3.4×   | +76.5       | +7.3
Table 3: Full network acceleration results. Arrows indicate an increase or decrease in the metric.
Speedup is the wall-clock acceleration. Mult. is the reduction of the number of multiplications in
convolutional layers (theoretical speedup). Mem. is the reduction of memory required to store the
network activations. Tuned error is the error after training from scratch (NIN) or fine-tuning (AlexNet,
VGG-16) of the accelerated network's weights.
Perforation | Asymm. [28] | Mult. ↓ | Mem. ↓ | Error ↑ (%) | Tuned error ↑ (%)
4.0×        | -           | 4.0×    | 2.5×   | +71.6       | +5.5
-           | 3.9×        | 3.9×    | 0.93×  | +6.7        | +2.0
1.8×        | 2.2×        | 4.0×    | 1.4×   | +2.9        | +1.6
Table 4: Acceleration of VGG-16, 4× theoretical speedup. The first row is the proposed method, the
second row is our reimplementation of the linear asymmetric method of Zhang et al. [28], and the third row
is the combined method. Perforation is complementary to the acceleration method of Zhang et al.
While the decomposition method outperforms perforation, the combined method is better than
both of the components.
5
Conclusion
We have presented PerforatedCNNs which exploit redundancy of intermediate representations of
modern CNNs to reduce the evaluation time and memory consumption. Perforation requires only a
minor modification of the convolution layer and obtains speedups close to theoretical ones on both
CPU and GPU. Compared to the baselines, PerforatedCNNs achieve lower error, are more flexible
and do not change the architecture of a CNN (number of parameters in the fully-connected layers
and the size of the intermediate representations). Retaining the architecture makes it easy to plug
PerforatedCNNs into existing computer vision pipelines and to fine-tune the
network instead of retraining it completely. Additionally, perforation can be combined with acceleration
methods which exploit other types of network redundancy to achieve further speedups.
In the future, we plan to explore the connection between PerforatedCNNs and visual attention by
considering input-dependent perforation masks that can focus on the salient parts of the input.
Unlike recent works on visual attention [1, 11, 20] which consider rectangular crops of an image,
PerforatedCNNs can process non-rectangular and even disjoint salient parts of the image by choosing
appropriate perforation masks in the convolutional layers.
Acknowledgments. We would like to thank Alexander Kirillov and Dmitry Kropotov for helpful
discussions, and Yandex for providing computational resources for this project. This work was
supported by RFBR project No. 15-31-20596 (mol-a-ved) and by Microsoft: Moscow State University
Joint Research Center (RPD 1053945).
References
[1] J. Ba, R. Salakhutdinov, R. Grosse, and B. Frey, "Learning wake-sleep recurrent attention models," NIPS, 2015.
[2] T. Chen, "Matrix shadow library," https://github.com/dmlc/mshadow, 2015.
[3] S. Chetlur, C. Woolley, P. Vandermersch, J. Cohen, J. Tran, B. Catanzaro, and E. Shelhamer, "cuDNN: Efficient primitives for deep learning," arXiv, 2014.
[4] M. D. Collins and P. Kohli, "Memory bounded deep convolutional networks," arXiv, 2014.
[5] M. Courbariaux, Y. Bengio, and J. David, "Low precision arithmetic for deep learning," ICLR, 2015.
[6] E. L. Denton, W. Zaremba, J. Bruna, Y. LeCun, and R. Fergus, "Exploiting linear structure within convolutional networks for efficient evaluation," NIPS, 2014.
[7] B. Graham, "Fractional max-pooling," arXiv, 2014.
[8] B. Graham, "Spatially-sparse convolutional neural networks," arXiv, 2014.
[9] S. Gupta, A. Agrawal, K. Gopalakrishnan, and P. Narayanan, "Deep learning with limited numerical precision," ICML, 2015.
[10] M. Jaderberg, A. Vedaldi, and A. Zisserman, "Speeding up convolutional neural networks with low rank expansions," BMVC, 2014.
[11] M. Jaderberg, K. Simonyan, A. Zisserman et al., "Spatial transformer networks," NIPS, 2015.
[12] Y. Jia, E. Shelhamer, J. Donahue, S. Karayev, J. Long, R. Girshick, S. Guadarrama, and T. Darrell, "Caffe: Convolutional architecture for fast feature embedding," ACM ICM, 2014.
[13] A. Krizhevsky, "cuda-convnet2," https://github.com/akrizhevsky/cuda-convnet2/, 2014.
[14] A. Krizhevsky, I. Sutskever, and G. E. Hinton, "Imagenet classification with deep convolutional neural networks," NIPS, 2012.
[15] V. Lebedev and V. Lempitsky, "Fast convnets using group-wise brain damage," CVPR, 2016.
[16] V. Lebedev, Y. Ganin, M. Rakhuba, I. Oseledets, and V. Lempitsky, "Speeding-up convolutional neural networks using fine-tuned CP-decomposition," ICLR, 2015.
[17] M. Lin, Q. Chen, and S. Yan, "Network in network," ICLR, 2014.
[18] S. Misailovic, S. Sidiroglou, H. Hoffmann, and M. Rinard, "Quality of service profiling," ICSE, 2010.
[19] S. Misailovic, D. M. Roy, and M. C. Rinard, "Probabilistically accurate program transformations," Static Analysis, 2011.
[20] V. Mnih, N. Heess, A. Graves et al., "Recurrent models of visual attention," NIPS, 2014.
[21] A. Novikov, D. Podoprikhin, A. Osokin, and D. Vetrov, "Tensorizing neural networks," NIPS, 2015.
[22] K. Ovtcharov, O. Ruwase, J.-Y. Kim, J. Fowers, K. Strauss, and E. S. Chung, "Accelerating deep convolutional neural networks using specialized hardware," Microsoft Research Whitepaper, 2015.
[23] M. Samadi, D. A. Jamshidi, J. Lee, and S. Mahlke, "Paraprox: Pattern-based approximation for data parallel applications," ASPLOS, 2014.
[24] S. Sidiroglou-Douskos, S. Misailovic, H. Hoffmann, and M. Rinard, "Managing performance vs. accuracy trade-offs with loop perforation," ACM SIGSOFT, 2011.
[25] K. Simonyan and A. Zisserman, "Very deep convolutional networks for large-scale image recognition," ICLR, 2015.
[26] A. Vedaldi and K. Lenc, "MatConvNet: Convolutional neural networks for MATLAB," arXiv, 2014.
[27] Z. Yang, M. Moczulski, M. Denil, N. de Freitas, A. J. Smola, L. Song, and Z. Wang, "Deep fried convnets," ICCV, 2015.
[28] X. Zhang, J. Zou, K. He, and J. Sun, "Accelerating very deep convolutional networks for classification and detection," arXiv, 2015.
Learning Deep Embeddings with Histogram Loss
Evgeniya Ustinova and Victor Lempitsky
Skolkovo Institute of Science and Technology (Skoltech)
Moscow, Russia
Abstract
We suggest a loss for learning deep embeddings. The new loss does not introduce
parameters that need to be tuned and results in very good embeddings across a range
of datasets and problems. The loss is computed by estimating two distributions of
similarities, for positive (matching) and negative (non-matching) sample pairs, and
then computing the probability that a positive pair has a lower similarity score
than a negative pair based on the estimated similarity distributions. We show that
such operations can be performed in a simple and piecewise-differentiable manner
using 1D histograms with soft assignment operations. This makes the proposed
loss suitable for learning deep embeddings using stochastic optimization. In the
experiments, the new loss performs favourably compared to recently proposed
alternatives.
1
Introduction
Deep feed-forward embeddings play a crucial role across a wide range of tasks and applications in
image retrieval [1, 8, 15], biometric verification [3, 5, 13, 17, 22, 25, 28], visual product search [21],
finding sparse and dense image correspondences [20, 29], etc. Under this approach, complex input
patterns (e.g. images) are mapped into a high-dimensional space through a chain of feed-forward
transformations, while the parameters of the transformations are learned from a large amount of
supervised data. The objective of the learning process is to achieve the proximity of semantically-related patterns (e.g. faces of the same person) and avoid the proximity of semantically-unrelated ones (e.g.
faces of different people) in the target space. In this work, we focus on simple similarity measures
such as Euclidean distance or scalar products, as they allow fast evaluation, the use of approximate
search methods, and ultimately lead to faster and more scalable systems.
Despite the ubiquity of deep feed-forward embeddings, learning them still poses a challenge and is
relatively poorly understood. While it is not hard to write down a loss based on tuples of training
points expressing the above-mentioned objective, optimizing such a loss rarely works "out of the
box" for complex data. This is evidenced by the broad variety of losses, which can be based on pairs,
triplets or quadruplets of points, as well as by a large number of optimization tricks employed in
recent works to reach state-of-the-art, such as pretraining for the classification task while restricting
fine-tuning to top layers only [13, 25], combining the embedding loss with the classification loss [22],
using complex data sampling such as mining "semi-hard" training triplets [17]. Most of the proposed
losses and optimization tricks come with a certain number of tunable parameters, and the quality of
the final embedding is often sensitive to them.
Here, we propose a new loss function for learning deep embeddings. In designing this function
we strive to avoid highly-sensitive parameters such as margins or thresholds of any kind. While
processing a batch of data points, the proposed loss is computed in two stages. Firstly, the two
one-dimensional distributions of similarities in the embedding space are estimated, one corresponding
to similarities between matching (positive) pairs, the other corresponding to similarities between
non-matching (negative) pairs. The distributions are estimated in a simple non-parametric way
30th Conference on Neural Information Processing Systems (NIPS 2016), Barcelona, Spain.
Figure 1: The histogram loss computation for a batch of examples (color-coded; same color indicates matching
samples). After the batch (left) is embedded into a high-dimensional space by a deep network (middle), we
compute the histograms of similarities of positive (top-right) and negative pairs (bottom-right). We then evaluate
the integral of the product between the negative distribution and the cumulative density function of the positive
distribution (shown with a dashed line), which corresponds to the probability that a randomly sampled positive
pair has a smaller similarity than a randomly sampled negative pair. Such a histogram loss can be minimized by
backpropagation. The only parameter associated with the loss is the number of histogram bins, to which the
results have very low sensitivity.
(as histograms with linearly-interpolated values-to-bins assignments). In the second stage, the
overlap between the two distributions is computed by estimating the probability that two points
sampled from the two distributions are in the wrong order, i.e. that a random negative pair has a higher
similarity than a random positive pair. The two stages are implemented in a piecewise-differentiable
manner, thus allowing to minimize the loss (i.e. the overlap between distributions) using standard
backpropagation. The number of bins in the histograms is the only tunable parameter associated
with our loss, and it can be set according to the batch size independently of the data itself. In the
experiments, we fix this parameter (and the batch size) and demonstrate the versatility of the loss by
applying it to four different image datasets of varying complexity and nature. Comparing the new
loss to the state of the art reveals its favourable performance. Overall, we hope that the proposed loss
will be used as an "out-of-the-box" solution for learning deep embeddings that requires little tuning
and leads to results close to the state of the art.
2
Related work
Recent works on learning embeddings use deep architectures (typically ConvNets [8, 10]) and
stochastic optimization. Below we review the loss functions that have been used in recent works.
Classification losses. It has been observed in [8] and confirmed later in multiple works (e.g. [15])
that deep networks trained for classification can be used for deep embedding. In particular, it is
sufficient to consider an intermediate representation arising in one of the last layers of the deep
network. The normalization is added post-hoc. Many of the works mentioned below pre-train their
embeddings as a part of the classification networks.
Pairwise losses. Methods that use pairwise losses sample pairs of training points and score them
independently. The pioneering work on deep embeddings [3] penalizes the deviation from the unit
cosine similarity for positive pairs and the deviation from −1 or −0.9 for negative pairs. Perhaps,
the most popular of pairwise losses is the contrastive loss [5, 20], which minimizes the distances in
the positive pairs and tries to maximize the distances in the negative pairs as long as these distances
are smaller than some margin M . Several works pointed to the fact that attempting to collapse all
positive pairs may lead to excessive overfitting and therefore suggested losses that mitigate this
effect, e.g. a double-margin contrastive loss [12], which drops to zero for positive pairs as long as
their distances fall beyond the second (smaller) margin. Finally, several works use non-hinge based
pairwise losses such as log-sum-exp and cross-entropy on the similarity values that softly encourage
the similarity to be high for positive values and low for negative values (e.g. [25, 28]). The main
problem with pairwise losses is that the margin parameters might be hard to tune, especially since
the distributions of distances or similarities can be changing dramatically as the learning progresses.
While most works "skip" the burn-in period by initializing the embedding to a network pre-trained
2
for classification [25], [22] further demonstrated the benefit of admixing the classification loss during
the fine-tuning stage (which brings in another parameter).
Triplet losses. While pairwise losses care about the absolute values of distances of positive and
negative pairs, the quality of embeddings ultimately depends on the relative ordering between positive
and negative distances (or similarities). Indeed, the embedding meets the needs of most practical
applications as long as the similarities of positive pairs are greater than similarities of negative pairs
[19, 27]. The most popular class of losses for metric learning therefore considers triplets of points
$(x_0, x_+, x_-)$, where $(x_0, x_+)$ form a positive pair and $(x_0, x_-)$ form a negative pair, and measures the
difference in their distances or similarities. Triplet-based loss can then e.g. be aggregated over all
triplets using a hinge function of these differences. Triplet-based losses are popular for large-scale
embedding learning [4] and in particular for deep embeddings [13, 14, 17, 21, 29]. Setting the margin
in the triplet hinge-loss still represents a challenge, as does sampling "correct" triplets, since the
majority of them quickly become associated with zero loss. On the other hand, focusing sampling on
the hardest triplets can prevent efficient learning [17]. Triplet-based losses generally make learning
less constrained than pairwise losses. This is because for a low-loss embedding, the characteristic
distance separating positive and negative pairs can vary across the embedding space (depending on
the location of x0 ), which is not possible for pairwise losses. In some situations, such added flexibility
can increase overfitting.
Quadruplet losses. Quadruplet-based losses are similar to triplet-based losses as they are computed
by looking at the differences in distances/similarities of positive pairs and negative pairs. In the case
of quadruplet-based losses, the compared positive and negative pairs do not share a common point
(as they do for triplet-based losses). Quadruplet-based losses do not allow the flexibility of triplet-based losses discussed above (as they include comparisons of positive and negative pairs located in
different parts of the embedding space). At the same time, they are not as rigid as pairwise losses, as
they only penalize the relative ordering for negative pairs and positive pairs. Nevertheless, despite
these appealing properties, quadruplet-based losses remain rarely used and confined to "shallow"
embeddings [9, 31]. We are unaware of deep embedding approaches using quadruplet losses. A
potential problem with quadruplet-based losses in the large-scale setting is that the number of all
quadruplets is even larger than the number of triplets. Among all groups of losses, our approach
is most related to quadruplet-based ones, and can be seen as a way to organize learning of deep
embeddings with a quadruplet-based loss in an efficient and (almost) parameter-free manner.
3
Histogram loss
We now describe our loss function and then relate it to the quadruplet-based loss. Our loss (Figure 1)
is defined for a batch of examples $X = \{x_1, x_2, \ldots, x_N\}$ and a deep feedforward network $f(\cdot; \theta)$,
where $\theta$ represents the learnable parameters of the network. We assume that the last layer of the network
performs length-normalization, so that the embedded vectors $\{y_i = f(x_i; \theta)\}$ are L2-normalized.
We further assume that we know which elements should match each other and which should not.
Let $m_{ij}$ be +1 if $x_i$ and $x_j$ form a positive pair (correspond to a match) and $m_{ij}$ be −1 if
$x_i$ and $x_j$ are known to form a negative pair (these labels can be derived from class labels or be
specified otherwise). Given $\{m_{ij}\}$ and $\{y_i\}$ we can estimate the two probability distributions $p^+$
and $p^-$ corresponding to the similarities in positive and negative pairs respectively. In particular,
$S^+ = \{s_{ij} = \langle y_i, y_j \rangle \mid m_{ij} = +1\}$ and $S^- = \{s_{ij} = \langle y_i, y_j \rangle \mid m_{ij} = -1\}$ can be regarded as
sample sets from these two distributions. Although samples in these sets are not independent, we
keep all of them to ensure a large sample size.
Given the sample sets $S^+$ and $S^-$, we can use any statistical approach to estimate $p^+$ and $p^-$. The fact
that these distributions are one-dimensional and bounded to $[-1; +1]$ simplifies the task. Perhaps
the most obvious choice in this case is fitting simple histograms with uniformly spaced bins, and we
use this approach in our experiments. We therefore consider R-dimensional histograms $H^+$ and $H^-$,
with the nodes $t_1 = -1, t_2, \ldots, t_R = +1$ uniformly filling $[-1; +1]$ with the step $\Delta = \frac{2}{R-1}$. We
estimate the value $h_r^+$ of the histogram $H^+$ at each node as:
$$h_r^+ = \frac{1}{|S^+|} \sum_{(i,j)\,:\,m_{ij}=+1} \delta_{i,j,r}, \quad (1)$$
where (i, j) spans all positive pairs of points in the batch. The weights $\delta_{i,j,r}$ are chosen so that each
pair sample is assigned to the two adjacent nodes:
$$\delta_{i,j,r} = \begin{cases} (s_{ij} - t_{r-1})/\Delta, & \text{if } s_{ij} \in [t_{r-1}; t_r], \\ (t_{r+1} - s_{ij})/\Delta, & \text{if } s_{ij} \in [t_r; t_{r+1}], \\ 0, & \text{otherwise}. \end{cases} \quad (2)$$
We thus use linear interpolation for each entry in the pair set when assigning it to the two nodes. The
estimation of $H^-$ proceeds analogously. Note that the described approach is equivalent to using a
"triangular" kernel for density estimation; other kernel functions can be used as well [2].
Once we have the estimates for the distributions $p^+$ and $p^-$, we use them to estimate the probability
that the similarity in a random negative pair is greater than the similarity in a random positive pair
(the probability of reverse). Generally, this probability can be estimated as:
$$p_{\text{reverse}} = \int_{-1}^{1} p^-(x) \left[ \int_{-1}^{x} p^+(y)\, dy \right] dx = \int_{-1}^{1} p^-(x)\, \Phi^+(x)\, dx = \mathbb{E}_{x \sim p^-}\!\left[\Phi^+(x)\right], \quad (3)$$
where $\Phi^+(x)$ is the CDF (cumulative density function) of $p^+(x)$. The integral (3) can then be
approximated and computed as:
$$L(X, \theta) = \sum_{r=1}^{R} \Big( h_r^- \sum_{q=1}^{r} h_q^+ \Big) = \sum_{r=1}^{R} h_r^- \phi_r^+, \quad (4)$$
where L is our loss function (the histogram loss) computed for the batch X and the embedding
parameters $\theta$, which approximates the reverse probability; $\phi_r^+ = \sum_{q=1}^{r} h_q^+$ is the cumulative sum of
the histogram $H^+$.
Importantly, the loss (4) is differentiable w.r.t. the pairwise similarities $s \in S^+$ and $s \in S^-$. Indeed,
it is straightforward to obtain $\frac{\partial L}{\partial h_r^-} = \sum_{q=1}^{r} h_q^+$ and $\frac{\partial L}{\partial h_r^+} = \sum_{q=r}^{R} h_q^-$ from (4). Furthermore, from
(1) and (2) it follows that:
$$\frac{\partial h_r^+}{\partial s_{ij}} = \begin{cases} +\frac{1}{\Delta |S^+|}, & \text{if } s_{ij} \in [t_{r-1}; t_r], \\ -\frac{1}{\Delta |S^+|}, & \text{if } s_{ij} \in [t_r; t_{r+1}], \\ 0, & \text{otherwise}, \end{cases} \quad (5)$$
for any $s_{ij}$ such that $m_{ij} = +1$ (and analogously for $\frac{\partial h_r^-}{\partial s_{ij}}$). Finally, $\frac{\partial s_{ij}}{\partial y_i} = y_j$ and $\frac{\partial s_{ij}}{\partial y_j} = y_i$.
One can thus backpropagate the loss to the scalar product similarities, then further to the individual
embedded points, and then further into the deep embedding network.
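For concreteness, a minimal differentiable sketch of (1)-(4) is given below, written in PyTorch purely for illustration (the released code targets Caffe); the soft assignment is expressed via the equivalent triangular-kernel form noted above, and the batch is assumed to contain both positive and negative pairs.

```python
import torch

def histogram_loss(embeddings, labels, r_bins=100):
    """Histogram loss for a batch of L2-normalized embeddings.

    embeddings: (N, D) tensor; labels: (N,) integer class labels from
    which positive/negative pairs are derived.
    """
    n = embeddings.size(0)
    s = embeddings @ embeddings.t()                  # cosine similarities
    iu = torch.triu_indices(n, n, offset=1)          # all pairs, i < j
    sims = s[iu[0], iu[1]]
    pos = labels[iu[0]] == labels[iu[1]]

    t = torch.linspace(-1, 1, r_bins, device=embeddings.device)
    delta = 2.0 / (r_bins - 1)
    # Triangular soft assignment of every similarity to adjacent nodes
    w = (1 - (sims[None, :] - t[:, None]).abs() / delta).clamp(min=0)
    h_pos = w[:, pos].sum(dim=1) / pos.sum()         # eq. (1)
    h_neg = w[:, ~pos].sum(dim=1) / (~pos).sum()
    phi_pos = h_pos.cumsum(dim=0)                    # CDF estimate of p+
    return (h_neg * phi_pos).sum()                   # eq. (4)
```

Everything in the function is built from differentiable tensor operations, so calling `.backward()` on the result propagates gradients through the histograms to the embeddings exactly as described by eq. (5).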
Relation to quadruplet loss. Our loss first estimates the probability distributions of similarities
for positive and negative pairs in a semi-parametric ways (using histograms), and then computes
the probability of reverse using these distributions via equation (4). An alternative and purely nonparametric way would be to consider all possible pairs of positive and negative pairs contained in
the batch and to estimate this probability from such set of pairs of pairs. This would correspond
to evaluating a quadruplet-based loss similarly to [9, 31]. The number of pairs of pairs in a batch,
however, grows quartically (as a fourth-degree polynomial) with the batch size, rendering exhaustive
sampling impractical. This is in contrast to our loss, for which the separation into two stages brings
down the complexity to quadratic in batch size. Another efficient loss based on quadruplets is
introduced in [24]. The training is done pairwise, but the threshold separating positive and negative
pairs is also learned.
We note that quadruplet-based losses as in [9, 31] often encourage the positive pairs to be more
similar than negative pairs by some non-zero margin. It is also easy to incorporate such non-zero
margin into our method by defining the loss to be:
!
r+?
R
X
X
?
+
L? (X, ?) =
hr
hq ,
(6)
r=1
q=1
where the new loss effectively enforces the margin ? ?. We however do not use such modification in
our experiments (preliminary experiments do not show any benefit of introducing the margin).
Figure 2: (left) Recall@K for the CUB-200-2011 dataset for the Histogram loss (4). Different curves
correspond to different values of the histogram step Δ (0.005, 0.01, 0.02, 0.04), which is the only parameter inherent to our loss. The curves are very
similar for CUB-200-2011. (right) Recall@K for the CUHK03 labeled dataset for different batch sizes (64, 128, 256). Results
for batch size 256 are uniformly better than those for smaller values.
4
Experiments
In this section we present the results of embedding learning. We compare our loss to state-of-the-art pairwise and triplet losses, which have been reported in recent works to give state-of-the-art
performance on these datasets.
Baselines. In particular, we have evaluated the Binomial Deviance loss [28]. While we are aware only
of its use in person re-identification approaches, in our experiments it performed very well for product
image search and bird recognition, significantly outperforming the baseline pairwise (contrastive) loss
reported in [21], once its parameters are tuned. The binomial deviance loss is defined as:
$$J_{\text{dev}} = \sum_{i,j \in I} w_{i,j} \ln\!\big(\exp(-\alpha (s_{i,j} - \beta)\, m_{i,j}) + 1\big), \quad (7)$$
where I is the set of training image indices and $s_{i,j}$ is the similarity measure between the i-th and j-th
images (i.e. $s_{i,j} = \text{cosine}(x_i, x_j)$).
Furthermore, $m_{i,j}$ and $w_{i,j}$ are the learning supervision and scaling factors respectively:
$$m_{i,j} = \begin{cases} 1, & \text{if } (i, j) \text{ is a positive pair}, \\ -C, & \text{if } (i, j) \text{ is a negative pair}, \end{cases} \qquad w_{i,j} = \begin{cases} \frac{1}{n_1}, & \text{if } (i, j) \text{ is a positive pair}, \\ \frac{1}{n_2}, & \text{if } (i, j) \text{ is a negative pair}, \end{cases} \quad (8)$$
where $n_1$ and $n_2$ are the numbers of positive and negative pairs in the training set (or mini-batch)
respectively, and $\alpha$ and $\beta$ are hyper-parameters. Parameter C is the negative cost for balancing
the weights of positive and negative pairs that was introduced in [28]. Our experimental results suggest
that the quality of the embedding is sensitive to this parameter. Therefore, in the experiments we
report results for two versions of the loss: with C = 10, which is close to optimal for the re-identification
datasets, and with C = 25, which is close to optimal for the product and bird datasets.
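A minimal sketch of (7)-(8) follows; the hyper-parameter values shown are placeholders, not the tuned settings.

```python
import torch
import torch.nn.functional as F

def binomial_deviance_loss(sims, m, alpha=2.0, beta=0.5, c=25.0):
    """Binomial Deviance loss. `sims` holds the pairwise cosine
    similarities s_ij and `m` the +1/-1 pair labels; alpha, beta and the
    negative cost C are hyper-parameters (placeholder values here)."""
    pos = m > 0
    m_w = torch.where(pos, torch.ones_like(sims), -c * torch.ones_like(sims))
    w = torch.where(pos, 1.0 / pos.sum(), 1.0 / (~pos).sum())  # 1/n1, 1/n2
    # softplus(x) = ln(1 + exp(x)) is a numerically stable form of eq. (7)
    return (w * F.softplus(-alpha * (sims - beta) * m_w)).sum()
```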
We have also computed the results for the Lifted Structured Similarity Softmax (LSSS) loss [21] on
CUB-200-2011 [26] and Online Products [21] datasets and additionally applied it to re-identification
datasets. The Lifted Structured Similarity Softmax loss is triplet-based and uses a sophisticated triplet
sampling strategy that was shown in [21] to outperform the standard triplet-based loss.
Additionally, we performed experiments with the triplet loss [18] that uses "semi-hard negative" triplet
sampling. Such sampling considers only triplets violating the margin, but still having the positive
distance smaller than the negative distance.
Figure 3: Recall@K for (left) CUB-200-2011 and (right) Online Products for different methods:
the Histogram loss (4), Binomial Deviance (7) with C = 10 and C = 25, LSSS [21], Triplet [18] with semi-hard sampling,
and the GoogLeNet pool5 baseline; results for the contrastive and triplet losses from [21] are also included.
The Binomial Deviance loss with C = 25 outperforms all other methods.
Datasets and evaluation metrics. We have evaluated the above mentioned loss functions on the
four datasets : CUB200-2011 [26], CUHK03 [11], Market-1501 [30] and Online Products [21]. All
these datasets have been used for evaluating methods of solving embedding learning tasks.
The CUB-200-2011 dataset includes 11,788 images of 200 classes corresponding to different bird
species. As in [21], we use the first 100 classes for training (5,864 images) and the remaining classes
for testing (5,924 images). The Online Products dataset includes 120,053 images of 22,634 classes.
Classes correspond to a number of online products from eBay.com. There are approximately 5.3
images for each product. We used the standard split from [21]: 11,318 classes (59,551 images) are
used for training and 11,316 classes (60,502 images) are used for testing. The images from the
CUB-200-2011 and the Online Products datasets are resized to 256 by 256, keeping the original
aspect ratio (padding is done when needed).
The CUHK03 dataset is commonly used for the person re-identification task. It includes 13,164
images of 1,360 pedestrians captured from 3 pairs of cameras. Each identity is observed by two
cameras and has 4.8 images in each camera on average. Following most of the previous works, we use
the "CUHK03-labeled" version of the dataset with manually annotated bounding boxes. According
to the CUHK03 evaluation protocol, 1,360 identities are split into 1,160 identities for training, 100
for validation and 100 for testing. We use the first split from the CUHK03 standard split set which is
provided with the dataset. The Market-1501 dataset includes 32,643 images of 1,501 pedestrians,
each pedestrian is captured by several cameras (from two to six). The dataset is divided randomly
into the test set of 750 identities and the train set of 751 identities.
Following [21, 28, 30], we report the Recall@K¹ metric for all the datasets. For CUB-200-2011 and
Online Products, every test image is used as the query in turn and the remaining images are used as the
gallery correspondingly. In contrast, for CUHK03, single-shot results are reported. This means that
one image for each identity from the test set is chosen randomly in each of its two camera views.
Recall@K values for 100 random query-gallery sets are averaged to compute the final result for a
given split. For the Market-1501 dataset, we use the multi-shot protocol (as is done in most other
works), as there are many images of the same person in the gallery set.
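A sketch of this metric for the multi-shot setting is shown below; for the single-image protocols, the query itself must additionally be excluded from the gallery, which this sketch assumes is already done.

```python
import numpy as np

def recall_at_k(query_emb, query_ids, gallery_emb, gallery_ids,
                ks=(1, 5, 10)):
    """Recall@K: the fraction of queries whose K most similar gallery
    items (by cosine similarity of L2-normalized embeddings) contain a
    correct match."""
    sims = query_emb @ gallery_emb.T                 # (n_query, n_gallery)
    order = np.argsort(-sims, axis=1)                # most similar first
    hits = query_ids[:, None] == gallery_ids[order]  # per-rank match flags
    return {k: hits[:, :k].any(axis=1).mean() for k in ks}
```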
Architectures used. For training on the CUB-200-2011 and the Online Products datasets we used
the same architecture as in [21], which coincides with the GoogLeNet architecture [23] up to the
"pool5" and the inner product layers, while the last layer is used to compute the embedding vectors.
The GoogLeNet part is pretrained on ImageNet ILSVRC [16] and the last layer is trained from scratch.
As in [21], all GoogLeNet layers are fine-tuned with a learning rate that is ten times less than
¹Recall@K is the probability of getting the right match among the first K gallery candidates sorted by similarity.
Figure 4: Recall@K for (left) CUHK03 and (right) Market-1501. The Histogram loss (4) outperforms
the Binomial Deviance (C = 10 and C = 25), LSSS, and Triplet (semi-hard) losses.
the learning rate of the last layer. We set the embedding size to 512 for all the experiments with
this architecture. We reproduced the results for the LSSS loss [21] for these two datasets. For the
architectures that use the Binomial Deviance loss, the Histogram loss, and the Triplet loss, the number of iterations
and the parameter values (for the former) are chosen using the validation set.
For training on CUHK03 and Market-1501 we used the Deep Metric Learning (DML) architecture
introduced in [28]. It has three CNN streams for the three parts of the pedestrian image (head and
upper torso, torso, lower torso and legs). Each of the streams consists of 2 convolution layers followed
by the ReLU non-linearity and max-pooling. The first convolution layers for the three streams have
shared weights. Descriptors are produced by the last 500-dimensional inner product layer that has the
concatenated outputs of the three streams as an input.
Implementation details. For all the experiments with loss functions (4) and (7) we used a quadratic
number of pairs in each batch (all the pairs that can be sampled from the batch). For the triplet loss,
"semi-hard" triplets chosen from all the possible triplets in the batch are used. For comparison with
other methods the batch size was set to 128. We sample batches randomly in such a way that there
are several images for each sampled class in the batch. We iterate over all the classes and all the images
corresponding to the classes, sampling images in turn. The sequences of the classes and of the
corresponding images are shuffled for every new epoch. CUB-200-2011 and Market-1501 include
more than ten images per class on average, so we limit the number of images of the same class in the
batch to ten for the experiments on these datasets. We used ADAM [7] for stochastic optimization
in all of the experiments. For all losses the learning rate is set to 1e−4 for all the experiments
except the ones on the CUB-200-2011 dataset, for which we found a learning rate of 1e−5
more effective. For the re-identification datasets the learning rate was decreased by 10 after 100K
iterations; for the other experiments the learning rate was fixed. The number of iterations for each method
was chosen using the validation set.

Dataset     | r = 1 | r = 5 | r = 10 | r = 15 | r = 20
CUHK03      | 65.77 | 92.85 | 97.62  | 98.94  | 99.43
Market-1501 | 59.47 | 80.73 | 86.94  | 89.28  | 91.09
Table 1: Final results (Recall@r, %) for CUHK03-labeled and Market-1501. For CUHK03-labeled,
results for 5 random splits were averaged. A batch size of 256 was used for both experiments.
Results. The Recall@K values for the experiments on CUB-200-2011, Online Products, CUHK03
and Market-1501 are shown in Figure 3 and Figure 4. The Binomial Deviance loss (7) gives the
best results for CUB-200-2011 and Online Products with the C parameter set to 25. We previously
checked several values of C on the CUB-200-2011 dataset and found the value C = 25 to be the
optimal one. We also observed that with smaller values of C the results are significantly worse than
Figure 5: Histograms of the positive and negative distance distributions on the CUHK03 test set for: (a) the initial
state (randomly initialized network); (b) a network trained with the Histogram loss; (c) the same for the Binomial Deviance
loss; (d) the same for the LSSS loss. Red is for negative pairs, green is for positive pairs. Negative cosine distance
is used for the Histogram and Binomial Deviance losses; Euclidean distance is used for the LSSS loss.
Initially the two distributions are highly overlapped. For the Histogram loss the distribution overlap is smaller than
for the LSSS loss.
those presented in Figure 3-left (for C equal to 2 the best Recall@1 is 43.50%). For CUHK03
the situation is reversed: the Histogram loss gives a boost of 2.64% over the Binomial Deviance
loss with C = 10 (which we found to be optimal for this dataset). The results are shown in
Figure 4-left. Embedding distributions of the positive and negative pairs from the CUHK03 test
set for different methods are shown in Figures 5b, 5c, and 5d. For the Market-1501 dataset
our method also outperforms the Binomial Deviance loss for both values of C. In contrast to the
experiments with CUHK03, the Binomial Deviance loss appeared to perform better with C set to 25
than to 10 for Market-1501. We have also investigated how the size of the histogram bin affects the
model performance for the Histogram loss. As shown in Figure 2-left, the results for CUB-200-2011 remain stable for step sizes equal to 0.005, 0.01, 0.02 and 0.04 (these values correspond to 400,
200, 100 and 50 bins in the histograms). In our method, distributions of similarities of training data
are estimated by distributions of similarities within mini-batches. Therefore we also show results
for the Histogram loss for various batch sizes (Figure 2-right). Larger batches are
preferable: for CUHK03, Recall@K for batch size 256 is uniformly better than Recall@K
for 128 and 64. We also observed similar behaviour for Market-1501. Additionally, we present
our final results (batch size set to 256) for CUHK03 and Market-1501 in Table 1. For CUHK03,
Recall@K values for 5 random splits were averaged. To the best of our knowledge, these results
corresponded to the state of the art on CUHK03 and Market-1501 at the moment of submission. To
summarize the results of the comparison: the new (Histogram) loss gives the best results on the two
person re-identification problems. For CUB-200-2011 and Online Products it came very close to the
best loss (Binomial Deviance with C = 25). Interestingly, the histogram loss uniformly outperformed
the triplet-based LSSS loss [21] in our experiments, including two datasets from [21]. Importantly,
the new loss does not require tuning any parameters associated with it (though we have found learning
with our loss to be sensitive to the learning rate).
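Because the loss is defined entirely through these in-batch similarity histograms, a compact sketch may help. The NumPy rendering below is our own reconstruction of the idea, not the released Caffe code: similarities of positive and negative pairs are soft-binned with triangular weights, and the loss estimates the probability that a random negative pair is more similar than a random positive pair. The function name and the default node count are ours.

```python
import numpy as np

def histogram_loss(sim, pos_mask, neg_mask, num_nodes=201):
    """Estimate P(similarity of a negative pair > similarity of a positive
    pair) from one mini-batch; sim holds pairwise similarities in [-1, 1]."""
    nodes = np.linspace(-1.0, 1.0, num_nodes)   # histogram bin centers
    delta = nodes[1] - nodes[0]

    def soft_histogram(values):
        # Triangular weights: each similarity contributes linearly to its
        # two neighboring bins; the histogram is then normalized.
        w = np.maximum(0.0, 1.0 - np.abs(values[None, :] - nodes[:, None]) / delta)
        h = w.sum(axis=1)
        return h / h.sum()

    h_pos = soft_histogram(np.asarray(sim)[pos_mask])
    h_neg = soft_histogram(np.asarray(sim)[neg_mask])
    cdf_pos = np.cumsum(h_pos)                  # cumulative positive histogram
    return float(np.sum(h_neg * cdf_pos))       # estimated overlap probability
```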
5 Conclusion
In this work we have suggested a new loss function for learning deep embeddings, called the
Histogram loss. Like most previous losses, it is based on the idea of making the distributions of
the similarities of the positive and negative pairs less overlapping. Unlike other losses used for
deep embeddings, the new loss comes with virtually no parameters that need to be tuned. It also
incorporates information across a large number of quadruplets formed from training samples in
the mini-batch and implicitly takes into account all such quadruplets. We have demonstrated
the competitive results of the new loss on a number of datasets. In particular, the Histogram loss
outperformed other losses for the person re-identification problem on CUHK03 and Market-1501
datasets. The code for Caffe [6] is available at: https://github.com/madkn/HistogramLoss.
Acknowledgement: This research is supported by the Russian Ministry of Science and Education
grant RFMEFI57914X0071.
References
[1] R. Arandjelović, P. Gronat, A. Torii, T. Pajdla, and J. Sivic. NetVLAD: CNN architecture for weakly supervised place recognition. IEEE International Conference on Computer Vision, 2015.
[2] A. Bowman and A. Azzalini. Applied smoothing techniques for data analysis. Number 18 in Oxford statistical science series. Clarendon Press, Oxford, 1997.
[3] J. Bromley, J. W. Bentz, L. Bottou, I. Guyon, Y. LeCun, C. Moore, E. Säckinger, and R. Shah. Signature verification using a "siamese" time delay neural network. International Journal of Pattern Recognition and Artificial Intelligence, 7(04):669-688, 1993.
[4] G. Chechik, V. Sharma, U. Shalit, and S. Bengio. Large scale online learning of image similarity through ranking. The Journal of Machine Learning Research, 11:1109-1135, 2010.
[5] S. Chopra, R. Hadsell, and Y. LeCun. Learning a similarity metric discriminatively, with application to face verification. 2005 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR 2005), 20-26 June 2005, San Diego, CA, USA, pp. 539-546, 2005.
[6] Y. Jia, E. Shelhamer, J. Donahue, S. Karayev, J. Long, R. Girshick, S. Guadarrama, and T. Darrell. Caffe: Convolutional architecture for fast feature embedding. arXiv preprint arXiv:1408.5093, 2014.
[7] D. P. Kingma and J. Ba. Adam: A method for stochastic optimization. CoRR, abs/1412.6980, 2014.
[8] A. Krizhevsky, I. Sutskever, and G. E. Hinton. Imagenet classification with deep convolutional neural networks. Advances in Neural Information Processing Systems (NIPS), pp. 1097-1105, 2012.
[9] M. Law, N. Thome, and M. Cord. Quadruplet-wise image similarity learning. Proceedings of the IEEE International Conference on Computer Vision, pp. 249-256, 2013.
[10] Y. LeCun, B. Boser, J. S. Denker, D. Henderson, R. E. Howard, W. Hubbard, and L. D. Jackel. Backpropagation applied to handwritten zip code recognition. Neural Computation, 1(4):541-551, 1989.
[11] W. Li, R. Zhao, T. Xiao, and X. Wang. Deepreid: Deep filter pairing neural network for person re-identification. 2014 IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2014, Columbus, OH, USA, June 23-28, 2014, pp. 152-159, 2014.
[12] J. Lin, O. Morère, V. Chandrasekhar, A. Veillard, and H. Goh. Deephash: Getting regularization, depth and fine-tuning right. CoRR, abs/1501.04711, 2015.
[13] O. M. Parkhi, A. Vedaldi, and A. Zisserman. Deep face recognition. Proceedings of the British Machine Vision Conference 2015, BMVC 2015, Swansea, UK, September 7-10, 2015, pp. 41.1-41.12, 2015.
[14] Q. Qian, R. Jin, S. Zhu, and Y. Lin. Fine-grained visual categorization via multi-stage metric learning. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 3716-3724, 2015.
[15] A. S. Razavian, H. Azizpour, J. Sullivan, and S. Carlsson. CNN features off-the-shelf: An astounding baseline for recognition. IEEE Conference on Computer Vision and Pattern Recognition, CVPR Workshops 2014, Columbus, OH, USA, June 23-28, 2014, pp. 512-519, 2014.
[16] O. Russakovsky, J. Deng, H. Su, J. Krause, S. Satheesh, S. Ma, Z. Huang, A. Karpathy, A. Khosla, M. Bernstein, A. C. Berg, and L. Fei-Fei. ImageNet Large Scale Visual Recognition Challenge. International Journal of Computer Vision (IJCV), 115(3):211-252, 2015.
[17] F. Schroff, D. Kalenichenko, and J. Philbin. Facenet: A unified embedding for face recognition and clustering. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 815-823, 2015.
[18] F. Schroff, D. Kalenichenko, and J. Philbin. Facenet: A unified embedding for face recognition and clustering. IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2015, Boston, MA, USA, June 7-12, 2015, pp. 815-823, 2015.
[19] M. Schultz and T. Joachims. Learning a distance metric from relative comparisons. Advances in Neural Information Processing Systems (NIPS), p. 41, 2004.
[20] E. Simo-Serra, E. Trulls, L. Ferraz, I. Kokkinos, P. Fua, and F. Moreno-Noguer. Discriminative learning of deep convolutional feature point descriptors. Proceedings of the IEEE International Conference on Computer Vision, pp. 118-126, 2015.
[21] H. O. Song, Y. Xiang, S. Jegelka, and S. Savarese. Deep metric learning via lifted structured feature embedding. Computer Vision and Pattern Recognition (CVPR), 2016.
[22] Y. Sun, Y. Chen, X. Wang, and X. Tang. Deep learning face representation by joint identification-verification. Advances in Neural Information Processing Systems, pp. 1988-1996, 2014.
[23] C. Szegedy, W. Liu, Y. Jia, P. Sermanet, S. Reed, D. Anguelov, D. Erhan, V. Vanhoucke, and A. Rabinovich. Going deeper with convolutions. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 1-9, 2015.
[24] O. Tadmor, T. Rosenwein, S. Shalev-Shwartz, Y. Wexler, and A. Shashua. Learning a metric embedding for face recognition using the multibatch method. NIPS, 2016.
[25] Y. Taigman, M. Yang, M. Ranzato, and L. Wolf. Deepface: Closing the gap to human-level performance in face verification. Computer Vision and Pattern Recognition (CVPR), 2014 IEEE Conference on, pp. 1701-1708. IEEE, 2014.
[26] C. Wah, S. Branson, P. Welinder, P. Perona, and S. Belongie. The Caltech-UCSD Birds-200-2011 Dataset. Technical Report CNS-TR-2011-001, 2011.
[27] K. Q. Weinberger and L. K. Saul. Distance metric learning for large margin nearest neighbor classification. The Journal of Machine Learning Research, 10:207-244, 2009.
[28] D. Yi, Z. Lei, and S. Z. Li. Deep metric learning for practical person re-identification. arXiv preprint arXiv:1407.4979, 2014.
[29] J. Žbontar and Y. LeCun. Stereo matching by training a convolutional neural network to compare image patches. arXiv preprint arXiv:1510.05970, 2015.
[30] L. Zheng, L. Shen, L. Tian, S. Wang, J. Wang, and Q. Tian. Scalable person re-identification: A benchmark. Computer Vision, IEEE International Conference on, 2015.
[31] W.-S. Zheng, S. Gong, and T. Xiang. Reidentification by relative distance comparison. Pattern Analysis and Machine Intelligence, IEEE Transactions on, 35(3):653-668, 2013.
R-FCN: Object Detection via
Region-based Fully Convolutional Networks
Jifeng Dai
Microsoft Research
Yi Li∗
Tsinghua University
Kaiming He
Microsoft Research
Jian Sun
Microsoft Research
Abstract
We present region-based, fully convolutional networks for accurate and efficient
object detection. In contrast to previous region-based detectors such as Fast/Faster
R-CNN [7, 19] that apply a costly per-region subnetwork hundreds of times, our
region-based detector is fully convolutional with almost all computation shared on
the entire image. To achieve this goal, we propose position-sensitive score maps
to address a dilemma between translation-invariance in image classification and
translation-variance in object detection. Our method can thus naturally adopt fully
convolutional image classifier backbones, such as the latest Residual Networks
(ResNets) [10], for object detection. We show competitive results on the PASCAL
VOC datasets (e.g., 83.6% mAP on the 2007 set) with the 101-layer ResNet.
Meanwhile, our result is achieved at a test-time speed of 170 ms per image, 2.5-20×
faster than the Faster R-CNN counterpart. Code is made publicly available at:
https://github.com/daijifeng001/r-fcn.
1 Introduction
A prevalent family [9, 7, 19] of deep networks for object detection can be divided into two subnetworks
by the Region-of-Interest (RoI) pooling layer [7]: (i) a shared, "fully convolutional" subnetwork
independent of RoIs, and (ii) an RoI-wise subnetwork that does not share computation. This
decomposition [9] historically resulted from the pioneering classification architectures, such
as AlexNet [11] and VGG Nets [24], that consist of two subnetworks by design: a convolutional
subnetwork ending with a spatial pooling layer, followed by several fully-connected (fc) layers. Thus
the (last) spatial pooling layer in image classification networks is naturally turned into the RoI pooling
layer in object detection networks [9, 7, 19].
But recent state-of-the-art image classification networks such as Residual Nets (ResNets) [10] and
GoogLeNets [25, 27] are by design fully convolutional.2 By analogy, it appears natural to use
all convolutional layers to construct the shared, convolutional subnetwork in the object detection
architecture, leaving the RoI-wise subnetwork no hidden layer. However, as empirically investigated
in this work, this naïve solution turns out to have considerably inferior detection accuracy that does
not match the network's superior classification accuracy. To remedy this issue, in the ResNet paper
[10] the RoI pooling layer of the Faster R-CNN detector [19] is unnaturally inserted between two
sets of convolutional layers; this creates a deeper RoI-wise subnetwork that improves accuracy, at
the cost of lower speed due to the unshared per-RoI computation.
We argue that the aforementioned unnatural design is caused by a dilemma of increasing translation
invariance for image classification vs. respecting translation variance for object detection. On one
hand, the image-level classification task favors translation invariance: a shift of an object inside an
image should be indiscriminative. Thus, deep (fully) convolutional architectures that are as translation-
∗ This work was done when Yi Li was an intern at Microsoft Research.
2 Only the last layer is fully-connected, which is removed and replaced when fine-tuning for object detection.
Figure 1: Key idea of R-FCN for object detection. In this illustration, there are k × k = 3 × 3
position-sensitive score maps generated by a fully convolutional network. For each of the k × k bins
in an RoI, pooling is only performed on one of the k² maps (marked by different colors).
Table 1: Methodologies of region-based detectors using ResNet-101 [10].

                                           R-CNN [8]   Faster R-CNN [20, 10]   R-FCN [ours]
depth of shared convolutional subnetwork       0                 91                 101
depth of RoI-wise subnetwork                 101                 10                   0
invariant as possible are preferable as evidenced by the leading results on ImageNet classification
[10, 25, 27]. On the other hand, the object detection task needs localization representations that are
translation-variant to an extent. For example, translation of an object inside a candidate box should
produce meaningful responses for describing how good the candidate box overlaps the object. We
hypothesize that deeper convolutional layers in an image classification network are less sensitive
to translation. To address this dilemma, the ResNet paper's detection pipeline [10] inserts the RoI
pooling layer into convolutions; this region-specific operation breaks down translation invariance,
and the post-RoI convolutional layers are no longer translation-invariant when evaluated across
different regions. However, this design sacrifices training and testing efficiency since it introduces a
considerable number of region-wise layers (Table 1).
In this paper, we develop a framework called Region-based Fully Convolutional Network (R-FCN)
for object detection. Our network consists of shared, fully convolutional architectures as is the case of
FCN [16]. To incorporate translation variance into FCN, we construct a set of position-sensitive score
maps by using a bank of specialized convolutional layers as the FCN output. Each of these score
maps encodes the position information with respect to a relative spatial position (e.g., "to the left of
an object"). On top of this FCN, we append a position-sensitive RoI pooling layer that shepherds
information from these score maps, with no weight (convolutional/fc) layers following. The entire
architecture is learned end-to-end. All learnable layers are convolutional and shared on the entire
image, yet encode spatial information required for object detection. Figure 1 illustrates the key idea
and Table 1 compares the methodologies among region-based detectors.
Using the 101-layer Residual Net (ResNet-101) [10] as the backbone, our R-FCN yields competitive
results of 83.6% mAP on the PASCAL VOC 2007 set and 82.0% the 2012 set. Meanwhile, our results
are achieved at a test-time speed of 170 ms per image using ResNet-101, which is 2.5× to 20× faster
than the Faster R-CNN + ResNet-101 counterpart in [10]. These experiments demonstrate that our
method manages to address the dilemma between invariance/variance on translation, and fully convolutional image-level classifiers such as ResNets can be effectively converted to fully convolutional object
detectors. Code is made publicly available at: https://github.com/daijifeng001/r-fcn.
2 Our approach
Overview. Following R-CNN [8], we adopt the popular two-stage object detection strategy [8, 9, 6,
7, 19, 1, 23] that consists of: (i) region proposal, and (ii) region classification. Although methods that
do not rely on region proposal do exist (e.g., [18, 15]), region-based systems still possess leading
accuracy on several benchmarks [5, 14, 21]. We extract candidate regions by the Region Proposal
Figure 2: Overall architecture of R-FCN. A Region Proposal Network (RPN) [19] proposes candidate
RoIs, which are then applied on the score maps. All learnable weight layers are convolutional and are
computed on the entire image; the per-RoI computational cost is negligible.
Network (RPN) [19], which is a fully convolutional architecture in itself. Following [19], we share
the features between RPN and R-FCN. Figure 2 shows an overview of the system.
Given the proposal regions (RoIs), the R-FCN architecture is designed to classify the RoIs into object
categories and background. In R-FCN, all learnable weight layers are convolutional and are computed
on the entire image. The last convolutional layer produces a bank of k² position-sensitive score
maps for each category, and thus has a k²(C + 1)-channel output layer with C object categories (+1
for background). The bank of k² score maps corresponds to a k × k spatial grid describing relative
positions. For example, with k × k = 3 × 3, the 9 score maps encode the cases of {top-left, top-center,
top-right, ..., bottom-right} of an object category.
R-FCN ends with a position-sensitive RoI pooling layer. This layer aggregates the outputs of the
last convolutional layer and generates scores for each RoI. Unlike [9, 7], our position-sensitive RoI
layer conducts selective pooling, and each of the k × k bins aggregates responses from only one score
map out of the bank of k × k score maps. With end-to-end training, this RoI layer shepherds the last
convolutional layer to learn specialized position-sensitive score maps. Figure 1 illustrates this idea.
Figure 3 and 4 visualize an example. The details are introduced as follows.
Backbone architecture. The incarnation of R-FCN in this paper is based on ResNet-101 [10],
though other networks [11, 24] are applicable. ResNet-101 has 100 convolutional layers followed by
global average pooling and a 1000-class fc layer. We remove the average pooling layer and the fc
layer and only use the convolutional layers to compute feature maps. We use the ResNet-101 released
by the authors of [10], pre-trained on ImageNet [21]. The last convolutional block in ResNet-101 is
2048-d, and we attach a randomly initialized 1024-d 1×1 convolutional layer for reducing dimension
(to be precise, this increases the depth in Table 1 by 1). Then we apply the k²(C + 1)-channel
convolutional layer to generate score maps, as introduced next.
Position-sensitive score maps & Position-sensitive RoI pooling. To explicitly encode position
information into each RoI, we divide each RoI rectangle into k × k bins by a regular grid. For an RoI
rectangle of a size w × h, a bin is of a size ≈ (w/k) × (h/k) [9, 7]. In our method, the last convolutional layer
is constructed to produce k² score maps for each category. Inside the (i, j)-th bin (0 ≤ i, j ≤ k − 1),
we define a position-sensitive RoI pooling operation that pools only over the (i, j)-th score map:

    r_c(i, j | Θ) = Σ_{(x,y) ∈ bin(i,j)} z_{i,j,c}(x + x₀, y + y₀ | Θ) / n.    (1)

Here r_c(i, j) is the pooled response in the (i, j)-th bin for the c-th category, z_{i,j,c} is one score map
out of the k²(C + 1) score maps, (x₀, y₀) denotes the top-left corner of an RoI, n is the number
of pixels in the bin, and Θ denotes all learnable parameters of the network. The (i, j)-th bin spans
⌊i·w/k⌋ ≤ x < ⌈(i + 1)·w/k⌉ and ⌊j·h/k⌋ ≤ y < ⌈(j + 1)·h/k⌉. The operation of Eqn. (1) is illustrated in
Figure 1, where a color represents a pair of (i, j). Eqn. (1) performs average pooling (as we use
throughout this paper), but max pooling can be conducted as well.
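A direct, unoptimized transcription of Eqn. (1) may make the bin geometry concrete. The sketch below is our own illustration; in particular, the channel layout of the score maps (bin-major, then class) is an assumption rather than something fixed by the paper. The voting and softmax of the next paragraph are included for completeness.

```python
import numpy as np

def ps_roi_pool(score_maps, roi, k, C):
    """Position-sensitive RoI pooling per Eqn. (1).
    score_maps: (k*k*(C+1), H, W); roi: (x0, y0, w, h) on the feature map."""
    x0, y0, w, h = roi
    r = np.zeros((C + 1, k, k))
    for j in range(k):                  # vertical bin index
        for i in range(k):              # horizontal bin index
            xs = x0 + int(np.floor(i * w / k))
            xe = max(x0 + int(np.ceil((i + 1) * w / k)), xs + 1)
            ys = y0 + int(np.floor(j * h / k))
            ye = max(y0 + int(np.ceil((j + 1) * h / k)), ys + 1)
            for c in range(C + 1):
                # pool only from the (i, j)-th score map of category c
                z = score_maps[(j * k + i) * (C + 1) + c]
                r[c, j, i] = z[ys:ye, xs:xe].mean()   # average pooling
    votes = r.mean(axis=(1, 2))          # average voting over the k*k bins
    s = np.exp(votes - votes.max())      # softmax across categories
    return votes, s / s.sum()
```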
The k² position-sensitive scores then vote on the RoI. In this paper we simply vote by averaging the
scores, producing a (C + 1)-dimensional vector for each RoI: r_c(Θ) = Σ_{i,j} r_c(i, j | Θ). Then we
compute the softmax responses across categories: s_c(Θ) = e^{r_c(Θ)} / Σ_{c'=0}^{C} e^{r_{c'}(Θ)}. They are used for
evaluating the cross-entropy loss during training and for ranking the RoIs during inference.
We further address bounding box regression [8, 7] in a similar way. Aside from the above k²(C + 1)-d
convolutional layer, we append a sibling 4k²-d convolutional layer for bounding box regression. The
position-sensitive RoI pooling is performed on this bank of 4k² maps, producing a 4k²-d vector for
each RoI. Then it is aggregated into a 4-d vector by average voting. This 4-d vector parameterizes a
bounding box as t = (t_x, t_y, t_w, t_h) following the parameterization in [7]. We note that we perform
class-agnostic bounding box regression for simplicity, but the class-specific counterpart (i.e., with a
4k²C-d output layer) is applicable.
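For readers unfamiliar with this parameterization, the sketch below decodes the averaged 4-d vote back into a box. It follows the center/size transform of [7]; the function name and argument layout are our own.

```python
import math

def decode_box(t, roi):
    """Map t = (tx, ty, tw, th) back to a box, given the RoI in
    center/size form roi = (xa, ya, wa, ha), as parameterized in [7]."""
    tx, ty, tw, th = t
    xa, ya, wa, ha = roi
    x = tx * wa + xa           # shift the center by a fraction of RoI size
    y = ty * ha + ya
    w = wa * math.exp(tw)      # scale width and height multiplicatively
    h = ha * math.exp(th)
    return x, y, w, h
```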
The concept of position-sensitive score maps is partially inspired by [3] that develops FCNs for
instance-level semantic segmentation. We further introduce the position-sensitive RoI pooling layer
that shepherds learning of the score maps for object detection. There is no learnable layer after
the RoI layer, enabling nearly cost-free region-wise computation and speeding up both training and
inference.
Training. With pre-computed region proposals, it is easy to end-to-end train the R-FCN architecture.
Following [7], our loss function defined on each RoI is the summation of the cross-entropy loss and
the box regression loss: L(s, t_{x,y,w,h}) = L_cls(s_{c*}) + λ[c* > 0] L_reg(t, t*). Here c* is the RoI's
ground-truth label (c* = 0 means background). L_cls(s_{c*}) = −log(s_{c*}) is the cross-entropy loss
for classification, L_reg is the bounding box regression loss as defined in [7], and t* represents the
ground truth box. [c* > 0] is an indicator which equals 1 if the argument is true and 0 otherwise.
We set the balance weight λ = 1 as in [7]. We define positive examples as the RoIs that have
intersection-over-union (IoU) overlap with a ground-truth box of at least 0.5, and negative otherwise.
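For illustration, the per-RoI loss can be written out as below. This is our own sketch; smooth L1 stands in for L_reg, matching its definition in [7], and the function name is ours.

```python
import numpy as np

def roi_loss(scores, t_pred, t_true, c_star, lam=1.0):
    """Per-RoI loss L = L_cls + lam * [c* > 0] * L_reg, as defined above.
    scores: softmax probabilities s_c; c_star: ground-truth label (0 = bg)."""
    l_cls = -np.log(scores[c_star])            # cross-entropy term
    if c_star == 0:                            # background RoI: no box loss
        return l_cls
    d = np.abs(np.asarray(t_pred, float) - np.asarray(t_true, float))
    l_reg = np.where(d < 1.0, 0.5 * d ** 2, d - 0.5).sum()   # smooth L1 [7]
    return l_cls + lam * l_reg
```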
It is easy for our method to adopt online hard example mining (OHEM) [23] during training. Our
negligible per-RoI computation enables nearly cost-free example mining. Assuming N proposals per
image, in the forward pass, we evaluate the loss of all N proposals. Then we sort all RoIs (positive
and negative) by loss and select B RoIs that have the highest loss. Backpropagation [12] is performed
based on the selected examples. Because our per-RoI computation is negligible, the forward time is
nearly unaffected by N, in contrast to OHEM Fast R-CNN in [23], which may double the training time.
We provide comprehensive timing statistics in Table 3 in the next section.
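The selection step of this mining scheme is only a sort, as the sketch below (our own, with an illustrative function name) makes explicit: the loss of all N RoIs is computed in the forward pass, and only the hardest B flow into the backward pass.

```python
import numpy as np

def ohem_select(losses, B=128):
    """Online hard example mining: return the indices of the B RoIs with
    the highest loss; only these are used for backpropagation."""
    order = np.argsort(np.asarray(losses))[::-1]   # descending by loss
    return order[:B]
```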
We use a weight decay of 0.0005 and a momentum of 0.9. By default we use single-scale training:
images are resized such that the scale (shorter side of image) is 600 pixels [7, 19]. Each GPU holds 1
image and selects B = 128 RoIs for backprop. We train the model with 8 GPUs (so the effective
mini-batch size is 8×). We fine-tune R-FCN using a learning rate of 0.001 for 20k mini-batches and
0.0001 for 10k mini-batches on VOC. To have R-FCN share features with RPN (Figure 2), we adopt
the 4-step alternating training3 in [19], alternating between training RPN and training R-FCN.
Inference. As illustrated in Figure 2, the feature maps shared between RPN and R-FCN are computed
(on an image with a single scale of 600). Then the RPN part proposes RoIs, on which the R-FCN
part evaluates category-wise scores and regresses bounding boxes. During inference we evaluate 300
RoIs as in [19] for fair comparisons. The results are post-processed by non-maximum suppression
(NMS) using a threshold of 0.3 IoU [8], as standard practice.
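For reference, a standard greedy NMS routine of the kind used here is sketched below. This is the textbook algorithm rather than code from the paper, and the (x1, y1, x2, y2) box format is our own convention.

```python
import numpy as np

def nms(boxes, scores, iou_thresh=0.3):
    """Greedy non-maximum suppression; boxes: (N, 4) as (x1, y1, x2, y2)."""
    x1, y1, x2, y2 = boxes.T
    areas = (x2 - x1) * (y2 - y1)
    order = np.argsort(scores)[::-1]      # highest score first
    keep = []
    while order.size > 0:
        i = order[0]
        keep.append(i)
        # IoU of the top box with all remaining candidates
        xx1 = np.maximum(x1[i], x1[order[1:]])
        yy1 = np.maximum(y1[i], y1[order[1:]])
        xx2 = np.minimum(x2[i], x2[order[1:]])
        yy2 = np.minimum(y2[i], y2[order[1:]])
        inter = np.maximum(0.0, xx2 - xx1) * np.maximum(0.0, yy2 - yy1)
        iou = inter / (areas[i] + areas[order[1:]] - inter)
        order = order[1:][iou <= iou_thresh]   # drop heavily overlapping boxes
    return keep
```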
À trous and stride. Our fully convolutional architecture enjoys the benefits of the network modifications that are widely used by FCNs for semantic segmentation [16, 2]. Particularly, we reduce
ResNet-101's effective stride from 32 pixels to 16 pixels, increasing the score map resolution. All
layers before and on the conv4 stage [10] (stride=16) are unchanged; the stride=2 operations in the
first conv5 block are modified to have stride=1, and all convolutional filters on the conv5 stage are
modified by the "hole algorithm" [16, 2] ("Algorithme à trous" [17]) to compensate for the reduced
stride. For fair comparisons, the RPN is computed on top of the conv4 stage (which is shared with
R-FCN), as is the case in [10] with Faster R-CNN, so the RPN is not affected by the à trous trick.
The following table shows the ablation results of R-FCN (k × k = 7 × 7, no hard example mining).
The à trous trick improves mAP by 2.6 points.
3 Although joint training [19] is applicable, it is not straightforward to perform example mining jointly.
Figure 3: Visualization of R-FCN (k × k = 3 × 3) for the person category.
Figure 4: Visualization when an RoI does not correctly overlap the object.
R-FCN with ResNet-101 on:    conv4, stride=16    conv5, stride=32    conv5, à trous, stride=16
mAP (%) on VOC 07 test            72.5                 74.0                    76.6
Visualization. In Figures 3 and 4 we visualize the position-sensitive score maps learned by R-FCN
when k × k = 3 × 3. These specialized maps are expected to be strongly activated at a specific
relative position of an object. For example, the "top-center-sensitive" score map exhibits high scores
roughly near the top-center position of an object. If a candidate box precisely overlaps with a true
object (Figure 3), most of the k² bins in the RoI are strongly activated, and their voting leads to a high
score. On the contrary, if a candidate box does not correctly overlap with a true object (Figure 4),
some of the k² bins in the RoI are not activated, and the voting score is low.
3 Related Work
R-CNN [8] has demonstrated the effectiveness of using region proposals [28, 29] with deep networks.
R-CNN evaluates convolutional networks on cropped and warped regions, and computation is not
shared among regions (Table 1). SPPnet [9], Fast R-CNN [7], and Faster R-CNN [19] are "semi-convolutional", in which a convolutional subnetwork performs shared computation on the entire
image and another subnetwork evaluates individual regions.
There have been object detectors that can be thought of as "fully convolutional" models. OverFeat [22]
detects objects by sliding multi-scale windows on the shared convolutional feature maps; similarly, in
Fast R-CNN [7] and [13], sliding windows that replace region proposals are investigated. In these
cases, one can recast a sliding window of a single scale as a single convolutional layer. The RPN
component in Faster R-CNN [19] is a fully convolutional detector that predicts bounding boxes with
respect to reference boxes (anchors) of multiple sizes. The original RPN is class-agnostic in [19], but
its class-specific counterpart is applicable (see also [15]) as we evaluate in the following.
Table 2: Comparisons among fully convolutional (or "almost" fully convolutional) strategies using
ResNet-101. All competitors in this table use the à trous trick. Hard example mining is not conducted.

method                              RoI output size (k × k)   mAP on VOC 07 (%)
naïve Faster R-CNN                  1 × 1                      61.7
                                    7 × 7                      68.9
class-specific RPN                  -                          67.6
R-FCN (w/o position-sensitivity)    1 × 1                      fail
R-FCN                               3 × 3                      75.5
                                    7 × 7                      76.6
Another family of object detectors resorts to fully-connected (fc) layers for generating holistic object
detection results on an entire image, such as [26, 4, 18].
4 Experiments
4.1 Experiments on PASCAL VOC
We perform experiments on PASCAL VOC [5] that has 20 object categories. We train the models on
the union set of VOC 2007 trainval and VOC 2012 trainval ("07+12") following [7], and evaluate on
VOC 2007 test set. Object detection accuracy is measured by mean Average Precision (mAP).
Comparisons with Other Fully Convolutional Strategies
Though fully convolutional detectors are available, experiments show that it is nontrivial for them to
achieve good accuracy. We investigate the following fully convolutional strategies (or "almost" fully
convolutional strategies that have only one classifier fc layer per RoI), using ResNet-101:
Naïve Faster R-CNN. As discussed in the introduction, one may use all convolutional layers in
ResNet-101 to compute the shared feature maps, and adopt RoI pooling after the last convolutional
layer (after conv5). An inexpensive 21-class fc layer is evaluated on each RoI (so this variant is
"almost" fully convolutional). The à trous trick is used for fair comparisons.
Class-specific RPN. This RPN is trained following [19], except that the 2-class (object or not)
convolutional classifier layer is replaced with a 21-class convolutional classifier layer. For fair
comparisons, for this class-specific RPN we use ResNet-101's conv5 layers with the à trous trick.
R-FCN without position-sensitivity. By setting k = 1 we remove the position-sensitivity of the
R-FCN. This is equivalent to global pooling within each RoI.
Analysis. Table 2 shows the results. We note that the standard (not naïve) Faster R-CNN in the ResNet
paper [10] achieves 76.4% mAP with ResNet-101 (see also Table 3), which inserts the RoI pooling
layer between conv4 and conv5 [10]. As a comparison, the naïve Faster R-CNN (that applies RoI
pooling after conv5) has a drastically lower mAP of 68.9% (Table 2). This comparison empirically
justifies the importance of respecting spatial information by inserting RoI pooling between layers for
the Faster R-CNN system. Similar observations are reported in [20].
The class-specific RPN has an mAP of 67.6% (Table 2), about 9 points lower than the standard
Faster R-CNN's 76.4%. This comparison is in line with the observations in [7, 13]; in fact, the
class-specific RPN is similar to a special form of Fast R-CNN [7] that uses dense sliding windows as
proposals, which shows inferior results as reported in [7, 13].
On the other hand, our R-FCN system has significantly better accuracy (Table 2). Its mAP (76.6%) is
on par with the standard Faster R-CNN's (76.4%, Table 3). These results indicate that our position-sensitive strategy manages to encode useful spatial information for locating objects, without using
any learnable layer after RoI pooling.
The importance of position-sensitivity is further demonstrated by setting k = 1, for which R-FCN is
unable to converge. In this degraded case, no spatial information can be explicitly captured within
an RoI. Moreover, we report that naïve Faster R-CNN is able to converge if its RoI pooling output
resolution is 1 × 1, but the mAP further drops by a large margin to 61.7% (Table 2).
Table 3: Comparisons between Faster R-CNN and R-FCN using ResNet-101. Timing is evaluated on
a single Nvidia K40 GPU. With OHEM, N RoIs per image are computed in the forward pass, and
128 samples are selected for backpropagation. 300 RoIs are used for testing following [19].

                 depth of per-RoI   training        train time   test time   mAP (%)
                 subnetwork         w/ OHEM?        (sec/img)    (sec/img)   on VOC07
Faster R-CNN     10                                 1.2          0.42        76.4
R-FCN            0                                  0.45         0.17        76.6
Faster R-CNN     10                 ✓ (300 RoIs)    1.5          0.42        79.3
R-FCN            0                  ✓ (300 RoIs)    0.45         0.17        79.5
Faster R-CNN     10                 ✓ (2000 RoIs)   2.9          0.42        N/A
R-FCN            0                  ✓ (2000 RoIs)   0.46         0.17        79.3
Table 4: Comparisons on PASCAL VOC 2007 test set using ResNet-101. "Faster R-CNN +++" [10]
uses iterative box regression, context, and multi-scale testing.

                        training data   mAP (%)   test time (sec/img)
Faster R-CNN [10]       07+12           76.4      0.42
Faster R-CNN +++ [10]   07+12+COCO      85.6      3.36
R-FCN                   07+12           79.5      0.17
R-FCN multi-sc train    07+12           80.5      0.17
R-FCN multi-sc train    07+12+COCO      83.6      0.17
Table 5: Comparisons on PASCAL VOC 2012 test set using ResNet-101. "07++12" [7] denotes the
union set of 07 trainval+test and 12 trainval. †: http://host.robots.ox.ac.uk:8080/anonymous/44L5HI.html ‡:
http://host.robots.ox.ac.uk:8080/anonymous/MVCM2L.html

                        training data   mAP (%)   test time (sec/img)
Faster R-CNN [10]       07++12          73.8      0.42
Faster R-CNN +++ [10]   07++12+COCO     83.8      3.36
R-FCN multi-sc train    07++12          77.6†     0.17
R-FCN multi-sc train    07++12+COCO     82.0‡     0.17
Comparisons with Faster R-CNN Using ResNet-101
Next we compare with the standard "Faster R-CNN + ResNet-101" [10], which is the strongest competitor
and the top-performer on the PASCAL VOC, MS COCO, and ImageNet benchmarks. We use
k × k = 7 × 7 in the following. Table 3 shows the comparisons. Faster R-CNN evaluates a 10-layer
subnetwork for each region to achieve good accuracy, but R-FCN has negligible per-region cost. With
300 RoIs at test time, Faster R-CNN takes 0.42s per image, 2.5× slower than our R-FCN that takes
0.17s per image (on a K40 GPU; this number is 0.11s on a Titan X GPU). R-FCN also trains faster
than Faster R-CNN. Moreover, hard example mining [23] adds no cost to R-FCN training (Table 3).
It is feasible to train R-FCN when mining from 2000 RoIs, in which case Faster R-CNN is 6× slower
(2.9s vs. 0.46s). But experiments show that mining from a larger set of candidates (e.g., 2000) has no
benefit (Table 3). So we use 300 RoIs for both training and inference in other parts of this paper.
Table 4 shows more comparisons. Following the multi-scale training in [9], we resize the image in
each training iteration such that the scale is randomly sampled from {400,500,600,700,800} pixels. We
still test a single scale of 600 pixels, so add no test-time cost. The mAP is 80.5%. In addition, we
train our model on the MS COCO [14] trainval set and then fine-tune it on the PASCAL VOC set.
R-FCN achieves 83.6% mAP (Table 4), close to the "Faster R-CNN +++" system in [10] that uses
ResNet-101 as well. We note that our competitive result is obtained at a test speed of 0.17 seconds per
image, 20× faster than Faster R-CNN +++ that takes 3.36 seconds as it further incorporates iterative
box regression, context, and multi-scale testing [10]. These comparisons are also observed on the
PASCAL VOC 2012 test set (Table 5).
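A minimal sketch of the scale sampling just described is given below; the function name and the return convention (a single resize factor for the shorter side) are our own.

```python
import random

def random_training_scale(image_hw, scales=(400, 500, 600, 700, 800)):
    """Pick a training scale at random and return the factor that makes
    the image's shorter side equal to the chosen scale."""
    h, w = image_hw
    s = random.choice(scales)
    return s / min(h, w)
```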
On the Impact of Depth
The following table shows the R-FCN results using ResNets of different depth [10], as well as the
VGG-16 model [24]. For the VGG-16 model, the fc layers (fc6, fc7) are turned into sliding convolutional
layers, and a 1 × 1 convolutional layer is applied on top to generate the position-sensitive score
maps. R-FCN with VGG-16 achieves slightly lower accuracy than R-FCN with ResNet-50. Our detection accuracy
increases when the depth is increased from 50 to 101 in ResNet, but gets saturated with a depth of
152.
                        training data   test data   VGG-16   ResNet-50   ResNet-101   ResNet-152
R-FCN                   07+12           07          75.6     77.0        79.5         79.6
R-FCN multi-sc train    07+12           07          76.5     78.7        80.5         80.4
On the Impact of Region Proposals
R-FCN can be easily applied with other region proposal methods, such as Selective Search (SS) [28]
and Edge Boxes (EB) [29]. The following table shows the results (using ResNet-101) with different
proposals. R-FCN performs competitively using SS or EB, showing the generality of our method.
                        training data   test data   RPN [19]   SS [28]   EB [29]
R-FCN                   07+12           07          79.5       77.2      77.8

4.2 Experiments on MS COCO
Next we evaluate on the MS COCO dataset [14] that has 80 object categories. Our experiments
involve the 80k train set, 40k val set, and 20k test-dev set. We set the learning rate as 0.001 for 90k
iterations and 0.0001 for next 30k iterations, with an effective mini-batch size of 8. We extend the
alternating training [19] from 4-step to 5-step (i.e., stopping after one more RPN training step), which
slightly improves accuracy on this dataset when the features are shared; we also report that 2-step
training is sufficient to achieve comparably good accuracy but the features are not shared.
The results are in Table 6. Our single-scale trained R-FCN baseline has a val result of 48.9%/27.6%.
This is comparable to the Faster R-CNN baseline (48.4%/27.2%), but ours is 2.5? faster testing.
It is noteworthy that our method performs better on objects of small sizes (defined by [14]). Our
multi-scale trained (yet single-scale tested) R-FCN has a result of 49.1%/27.8% on the val set and
51.5%/29.2% on the test-dev set. Considering COCO's wide range of object scales, we further
evaluate a multi-scale testing variant following [10], and use testing scales of {200,400,600,800,1000}.
The mAP is 53.2%/31.5%. This result is close to the 1st-place result (Faster R-CNN +++ with
ResNet-101, 55.7%/34.9%) in the MS COCO 2015 competition. Nevertheless, our method is simpler
and adds no bells and whistles such as context or iterative box regression that were used by [10], and
is faster for both training and testing.
Table 6: Comparisons on MS COCO dataset using ResNet-101. The COCO-style AP is evaluated @
IoU ∈ [0.5, 0.95]. AP@0.5 is the PASCAL-style AP evaluated @ IoU = 0.5.
                            training   test       AP@0.5   AP     AP      AP       AP      test time
                            data       data                       small   medium   large   (sec/img)
Faster R-CNN [10]           train      val        48.4     27.2   6.6     28.6     45.0    0.42
R-FCN                       train      val        48.9     27.6   8.9     30.5     42.0    0.17
R-FCN multi-sc train        train      val        49.1     27.8   8.8     30.8     42.2    0.17
Faster R-CNN +++ [10]       trainval   test-dev   55.7     34.9   15.6    38.7     50.9    3.36
R-FCN                       trainval   test-dev   51.5     29.2   10.3    32.4     43.3    0.17
R-FCN multi-sc train        trainval   test-dev   51.9     29.9   10.8    32.8     45.0    0.17
R-FCN multi-sc train, test  trainval   test-dev   53.2     31.5   14.3    35.5     44.2    1.00

5 Conclusion and Future Work
We presented Region-based Fully Convolutional Networks, a simple but accurate and efficient
framework for object detection. Our system naturally adopts the state-of-the-art image classification
backbones, such as ResNets, that are by design fully convolutional. Our method achieves accuracy
competitive with the Faster R-CNN counterpart, but is much faster during both training and inference.
We intentionally keep the R-FCN system presented in the paper simple. There have been a series
of orthogonal extensions of FCNs that were developed for semantic segmentation (e.g., see [2]), as
well as extensions of region-based methods for object detection (e.g., see [10, 1, 23]). We expect our
system will easily enjoy the benefits of the progress in the field.
References
[1] S. Bell, C. L. Zitnick, K. Bala, and R. Girshick. Inside-outside net: Detecting objects in context with skip
pooling and recurrent neural networks. In CVPR, 2016.
[2] L.-C. Chen, G. Papandreou, I. Kokkinos, K. Murphy, and A. L. Yuille. Semantic image segmentation with
deep convolutional nets and fully connected crfs. In ICLR, 2015.
[3] J. Dai, K. He, Y. Li, S. Ren, and J. Sun. Instance-sensitive fully convolutional networks. arXiv:1603.08678,
2016.
[4] D. Erhan, C. Szegedy, A. Toshev, and D. Anguelov. Scalable object detection using deep neural networks.
In CVPR, 2014.
[5] M. Everingham, L. Van Gool, C. K. Williams, J. Winn, and A. Zisserman. The PASCAL Visual Object
Classes (VOC) Challenge. IJCV, 2010.
[6] S. Gidaris and N. Komodakis. Object detection via a multi-region & semantic segmentation-aware cnn
model. In ICCV, 2015.
[7] R. Girshick. Fast R-CNN. In ICCV, 2015.
[8] R. Girshick, J. Donahue, T. Darrell, and J. Malik. Rich feature hierarchies for accurate object detection
and semantic segmentation. In CVPR, 2014.
[9] K. He, X. Zhang, S. Ren, and J. Sun. Spatial pyramid pooling in deep convolutional networks for visual
recognition. In ECCV. 2014.
[10] K. He, X. Zhang, S. Ren, and J. Sun. Deep residual learning for image recognition. In CVPR, 2016.
[11] A. Krizhevsky, I. Sutskever, and G. Hinton. Imagenet classification with deep convolutional neural
networks. In NIPS, 2012.
[12] Y. LeCun, B. Boser, J. S. Denker, D. Henderson, R. E. Howard, W. Hubbard, and L. D. Jackel. Backpropagation applied to handwritten zip code recognition. Neural computation, 1989.
[13] K. Lenc and A. Vedaldi. R-CNN minus R. In BMVC, 2015.
[14] T.-Y. Lin, M. Maire, S. Belongie, J. Hays, P. Perona, D. Ramanan, P. Dollár, and C. L. Zitnick. Microsoft
COCO: Common objects in context. In ECCV, 2014.
[15] W. Liu, D. Anguelov, D. Erhan, C. Szegedy, and S. Reed. SSD: Single shot multibox detector.
arXiv:1512.02325v2, 2015.
[16] J. Long, E. Shelhamer, and T. Darrell. Fully convolutional networks for semantic segmentation. In CVPR,
2015.
[17] S. Mallat. A wavelet tour of signal processing. Academic press, 1999.
[18] J. Redmon, S. Divvala, R. Girshick, and A. Farhadi. You only look once: Unified, real-time object detection.
In CVPR, 2016.
[19] S. Ren, K. He, R. Girshick, and J. Sun. Faster R-CNN: Towards real-time object detection with region
proposal networks. In NIPS, 2015.
[20] S. Ren, K. He, R. Girshick, X. Zhang, and J. Sun. Object detection networks on convolutional feature
maps. arXiv:1504.06066, 2015.
[21] O. Russakovsky, J. Deng, H. Su, J. Krause, S. Satheesh, S. Ma, Z. Huang, A. Karpathy, A. Khosla,
M. Bernstein, A. C. Berg, and L. Fei-Fei. ImageNet Large Scale Visual Recognition Challenge. IJCV,
2015.
[22] P. Sermanet, D. Eigen, X. Zhang, M. Mathieu, R. Fergus, and Y. LeCun. Overfeat: Integrated recognition,
localization and detection using convolutional networks. In ICLR, 2014.
[23] A. Shrivastava, A. Gupta, and R. Girshick. Training region-based object detectors with online hard example
mining. In CVPR, 2016.
[24] K. Simonyan and A. Zisserman. Very deep convolutional networks for large-scale image recognition. In
ICLR, 2015.
[25] C. Szegedy, W. Liu, Y. Jia, P. Sermanet, S. Reed, D. Anguelov, D. Erhan, and A. Rabinovich. Going deeper
with convolutions. In CVPR, 2015.
[26] C. Szegedy, A. Toshev, and D. Erhan. Deep neural networks for object detection. In NIPS, 2013.
[27] C. Szegedy, V. Vanhoucke, S. Ioffe, J. Shlens, and Z. Wojna. Rethinking the inception architecture for
computer vision. In CVPR, 2016.
[28] J. R. Uijlings, K. E. van de Sande, T. Gevers, and A. W. Smeulders. Selective search for object recognition.
IJCV, 2013.
[29] C. L. Zitnick and P. Dollár. Edge boxes: Locating object proposals from edges. In ECCV, 2014.
Bayesian optimization for automated model selection
Gustavo Malkomes,∗ Chip Schaff,∗ Roman Garnett
Department of Computer Science and Engineering
Washington University in St. Louis
St. Louis, MO 63130
{luizgustavo, cbschaff, garnett}@wustl.edu
Abstract
Despite the success of kernel-based nonparametric methods, kernel selection still
requires considerable expertise, and is often described as a "black art." We present a
sophisticated method for automatically searching for an appropriate kernel from an
infinite space of potential choices. Previous efforts in this direction have focused on
traversing a kernel grammar, only examining the data via computation of marginal
likelihood. Our proposed search method is based on Bayesian optimization in
model space, where we reason about model evidence as a function to be maximized.
We explicitly reason about the data distribution and how it induces similarity
between potential model choices in terms of the explanations they can offer for
observed data. In this light, we construct a novel kernel between models to explain
a given dataset. Our method is capable of finding a model that explains a given
dataset well without any human assistance, often with fewer computations of model
evidence than previous approaches, a claim we demonstrate empirically.
1 Introduction
Over the past decades, enormous human effort has been devoted to machine learning; preprocessing
data, model selection, and hyperparameter optimization are some examples of critical and often
expert-dependent tasks. The complexity of these tasks has in some cases relegated them to the realm
of "black art." In kernel methods in particular, the selection of an appropriate kernel to explain
a given dataset is critical to success in terms of the fidelity of predictions, but the vast space of
potential kernels renders the problem nontrivial. We consider the problem of automatically finding
an appropriate probabilistic model to explain a given dataset. Although our proposed algorithm is
general, we will focus on the case where a model can be completely specified by a kernel, as is the
case for example for centered Gaussian processes (GPs).
Recent work has begun to tackle the kernel-selection problem in a systematic way. Duvenaud et al.
[1] and Grosse et al. [2] described generative grammars for enumerating a countably infinite space of
arbitrarily complex kernels via exploiting the closure of kernels under additive and multiplicative
composition. We adopt this kernel grammar in this work as well. Given a dataset, Duvenaud et al.
[1] proposed searching this infinite space of models using a greedy search mechanism. Beginning
at the root of the grammar, we traverse the tree greedily attempting to maximize the (approximate)
evidence for the data given by a GP model incorporating the kernel.
In this work, we develop a more sophisticated mechanism for searching through this space. The
greedy search described above only considers a given dataset by querying a model's evidence. Our
search performs a metalearning procedure, which, conditional on a dataset, establishes similarities
among the models in terms of the space of explanations they can offer for the data. With this
viewpoint, we construct a novel kernel between models (a "kernel kernel"). We then approach
* These authors contributed equally to this work.
30th Conference on Neural Information Processing Systems (NIPS 2016), Barcelona, Spain.
the model-search problem via Bayesian optimization, treating the model evidence as an expensive
black-box function to be optimized as a function of the kernel. The dependence of our kernel between
models on the distribution of the data is critical; depending on a given dataset, the kernels generated
by a compositional grammar could be especially rich or deceptively so.
We develop an automatic framework for exploring a set of potential models, seeking the model that
best explains a given dataset. Although we focus on Gaussian process models defined by a grammar,
our method could be easily extended to any probabilistic model with a parametric or structured model
space. Our search appears to perform competitively with other baselines across a variety of datasets,
including the greedy method from [1], especially in terms of the number of models for which we
must compute the (expensive) evidence, which typically scales cubically for kernel methods.
2 Related work
There are several works attempting to create more expressive kernels, either by combining kernels or
designing custom ones. Multiple kernel learning approaches, for instance, construct a kernel for a
given dataset through a weighted sum of predefined and fixed set of kernels, adjusting the weights to
best explain the observed data. Besides limiting the space of kernels considered, the hyperparameters
of component kernels also need to be specified in advance [3, 4]. Another approach is to design
flexible kernel families [5-7]. These methods often use Bochner's theorem to reason in spectral space,
and can approximate any arbitrary stationary kernel function. In contrast, our method does not depend
on stationarity. Other work has developed expressive kernels by combining Gaussian processes with
deep belief networks; see, for example, [8-10]. Unfortunately, there is no free lunch; these methods
require complicated inference techniques that are much more costly than using standard kernels.
The goal of automated machine learning (autoML) is to automate complex machine-learning procedures using insights and techniques from other areas of machine learning. Our work falls into this
broad category of research. By applying machine learning methods throughout the entire modeling
process, it is possible to create more automated and, eventually, better systems. Bergstra et al. [11]
and Snoek et al. [12], for instance, have shown how to use modern optimization tools such as Bayesian
optimization to set the hyperparameters of machine learning methods (e.g., deep neural networks
and structured SVMs). Our approach to model search is also based on Bayesian optimization, and its
success in similar settings is encouraging for our adoption here. Gardner et al. [13] also considered
the automated model selection problem, but in an active leaning framework with a fixed set of models.
We note that our method could be adopted to their Bayesian active model selection framework with
minor changes, but we focus on the classical supervised learning case with a fixed training set.
3 Bayesian optimization for model search
Suppose we face a classical supervised learning problem defined on an input space X and output
space Y. We are given a set of training observations D = (X, y), where X represents the design
matrix of explanatory variables x_i ∈ X, and y_i ∈ Y is the respective value or label to be predicted.
Ultimately, we want to use D to predict the value y* associated with an unseen point x*. Given a
probabilistic model M, we may accomplish this via formation of the predictive distribution.
Suppose, however, that we are given a collection of probabilistic models M that could have plausibly
generated the data. Ideally, finding the source of D would let us solve our prediction task with the
highest fidelity. Let M ∈ M be a probabilistic model, and let Θ_M be the corresponding parameter
space. These models are typically parametric families of distributions, each of which encodes a
structural assumption about the data, for example, that the data can be described by a linear, quadratic,
or periodic trend. Further, the member distributions (M_θ ∈ M, θ ∈ Θ_M) of M differ from each
other by a particular value of some properties related to the data, represented by the hyperparameters
θ, such as amplitude, characteristic length scales, etc.
We wish to select one model from this collection of models M to explain D. From a Bayesian
perspective, the principled approach to this problem is Bayesian model selection.² The critical
value is model evidence, the probability of generating the observed data given a model M:

    p(y | X, M) = ∫_{Θ_M} p(y | X, θ, M) p(θ | M) dθ.    (1)

² "Model selection" is unfortunately sometimes also used in the GP literature for the process of
hyperparameter learning (selecting some M_θ ∈ M), rather than selecting a model class M, the focus
of our work.
The evidence (also called marginal likelihood) integrates over θ to account for all possible explanations of the data offered by the model, under a prior p(θ | M) associated with that model.
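For intuition, the integral in (1) can be approximated by simple Monte Carlo over the hyperparameter prior, as in the sketch below (plain NumPy; `log_lik` and `sample_prior` are assumed interfaces, not code from our implementation). In practice we use the Laplace approximation of §4.3 instead, since naive averaging needs many likelihood evaluations to be accurate.

```python
import numpy as np

def mc_log_evidence(log_lik, sample_prior, n_samples=1000, rng=None):
    """Naive Monte Carlo estimate of log p(y | X, M) = log E_{theta ~ p(theta|M)} p(y | X, theta, M).

    log_lik(theta):       returns log p(y | X, theta, M)
    sample_prior(n, rng): returns n hyperparameter draws from p(theta | M)
    """
    rng = np.random.default_rng() if rng is None else rng
    log_liks = np.array([log_lik(t) for t in sample_prior(n_samples, rng)])
    m = log_liks.max()  # log-mean-exp for numerical stability
    return m + np.log(np.mean(np.exp(log_liks - m)))
```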
Our goal is to automatically explore a space of models M to select a model³ M* ∈ M that explains a
given dataset D as well as possible, according to the model evidence. The essence of our method,
which we call Bayesian optimization for model search (BOMS), is viewing the evidence as a function
g : M → R to be optimized. We note two important aspects of g. First, for large datasets and/or
complex models, g is an expensive function, for example growing cubically with |D| for GP models.
Further, gradient information about g is impossible to compute due to the discrete nature of M. We
can, however, query a model's evidence as a black-box function. For these reasons, we propose
to optimize evidence over M using Bayesian optimization, a technique well-suited for optimizing
expensive, gradient-free, black-box objectives [14]. In this framework, we seek an optimal model

    M* = argmax_{M∈M} g(M; D),    (2)

where g(M; D) is the (log) model evidence:

    g(M; D) = log p(y | X, M).    (3)
We begin by placing a Gaussian process (GP) prior on g,

    p(g) = GP(g; μ_g, K_g),

where μ_g : M → R is a mean function and K_g : M × M → R is a covariance function appropriately
defined over the model space M. This is a nontrivial task due to the discrete and potentially complex
nature of M. We will suggest useful choices for μ_g and K_g when M is a space of Gaussian process
models below. Now, given observations of the evidence of a selected set of models,

    D_g = { (M_i, g(M_i; D)) },    (4)
we may compute the posterior distribution on g conditioned on D_g, which will be an updated Gaussian
process [15]. Bayesian optimization uses this probabilistic belief about g to induce an inexpensive
acquisition function that selects which model to evaluate next. Here we use the classical expected
improvement (EI) [16] acquisition function, or a slight variation described below, because it naturally
considers the trade-off between exploration and exploitation. The exact choice of acquisition function,
however, is not critical to our proposal. In each round of our model search, we will evaluate the
acquisition function at a number of candidate models C(D_g) = {M_i}, and compute the evidence of
the candidate where it is maximized:

    M′ = argmax_{M∈C} α_EI(M; D_g).

We then incorporate the chosen model M′ and the observed model evidence g(M′; D) into our
model evidence training set D_g, update the posterior on g, select a new set of candidates, and continue.
We repeat this iterative procedure until a budget is expended, typically measured in terms of the
number of models considered.
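A minimal sketch of this loop in Python appears below. The names `log_evidence`, `propose_candidates`, and `Kg` are assumed interfaces (the evidence approximation of §4.3, the candidate policy of §4.5, and the model kernel of §4.4); the zero-mean, noise-free GP on g is a simplification of the constant-mean model we actually use.

```python
import numpy as np
from scipy.stats import norm

def gp_posterior(K, y, k_star, k_ss, jitter=1e-8):
    # Posterior mean/variance of a zero-mean GP at one test point.
    L = np.linalg.cholesky(K + jitter * np.eye(len(y)))
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    v = np.linalg.solve(L, k_star)
    return k_star @ alpha, max(k_ss - v @ v, 1e-12)

def expected_improvement(mu, var, best):
    s = np.sqrt(var)
    z = (mu - best) / s
    return (mu - best) * norm.cdf(z) + s * norm.pdf(z)

def boms(log_evidence, propose_candidates, Kg, M0, n_evals=50):
    models, values = [M0], [log_evidence(M0)]
    for _ in range(n_evals - 1):
        C = propose_candidates(models, values)
        K = np.array([[Kg(a, b) for b in models] for a in models])
        y, best = np.array(values), max(values)
        scores = [expected_improvement(*gp_posterior(K, y,
                      np.array([Kg(M, b) for b in models]), Kg(M, M)), best)
                  for M in C]
        M_next = C[int(np.argmax(scores))]
        models.append(M_next)
        values.append(log_evidence(M_next))
    i = int(np.argmax(values))
    return models[i], values[i]
```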
We have observed that expected improvement [16] works well, especially for small and/or low-dimensional problems. When the dataset is large and/or high-dimensional, training costs can be
considerable and variable, especially for complex models. To give better anytime performance on
such datasets, we use expected improvement per second, where we divide the expected improvement
by an estimate of the time required to compute the evidence. In our experiments, this estimation was
performed by fitting a linear regression model to the log time to compute g(M; D) as a function of
the number of hyperparameters (the dimension of ?M ) that we train on the models available in Dg .
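A sketch of that cost model, assuming NumPy; the function names are our own, and the linear fit is exactly the least-squares regression described above.

```python
import numpy as np

def fit_time_model(n_hypers, seconds):
    """Fit log(evaluation time) as a linear function of the hyperparameter count."""
    slope, intercept = np.polyfit(np.asarray(n_hypers, float),
                                  np.log(np.asarray(seconds, float)), 1)
    return lambda d: float(np.exp(intercept + slope * d))  # predicted seconds

# EI per second simply rescales the acquisition by the predicted cost:
#   ei_per_second(M) = expected_improvement(mu, var, best) / time_model(dim(theta_M))
```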
The acquisition function allows us to quickly determine which models are more promising than
others, given the evidence we have observed so far. Since M is an infinite set of models, we cannot
consider every model in every round. Instead, we will define a heuristic to evaluate the acquisition
function at a smaller set of active candidate models below.
³ We could also select a set of models but, for simplicity, we assume that there is one model that best explains
the data with overwhelming probability, which would imply that there is no benefit in considering more than
one model, e.g., via Bayesian model averaging.

4 Bayesian optimization for Gaussian process kernel search
We introduced above a general framework for searching over a space of probabilistic models M
to explain a dataset D without making further assumptions about the nature of the models. In the
following, we will provide specific suggestions in the case that all members of M are Gaussian
process priors on a latent function.
We assume that our observations y were generated according to an unknown function f : X → R
via a fixed probabilistic observation mechanism p(y | f), where f_i = f(x_i). In our experiments
here, we will consider regression with additive Gaussian observation noise, but this is not integral
to our approach. We further assume a GP prior distribution on f, p(f) = GP(f; μ_f, K_f), where
μ_f : X → R is a mean function and K_f : X × X → R is a positive-definite covariance function or
kernel. For simplicity, we will assume that the prior on f is centered, μ_f(x) = 0, which lets us fully
define the prior on f by the kernel function K_f. We assume that the kernel function is parameterized
by hyperparameters that we concatenate into a vector θ. In this restricted context, a model M is
completely determined by the choice of kernel function and an associated hyperparameter prior
p(θ | M). Below we briefly review a previously suggested method for constructing an infinite space
of potential kernels to model the latent function f, and thus an infinite family of models M. We
then discuss the standardized and automated construction of associated hyperparameter priors.
4.1 Space of compositional Gaussian process kernels
We adopt the same space of kernels defined by Duvenaud et al. [1], which we briefly summarize here.
We refer the reader to the original paper for more details. Given a set of simple, so-called base kernels,
such as the common squared exponential (SE), periodic (PER), linear (LIN), and rational quadratic
(RQ) kernels, we create new and potentially complex kernels by summation and multiplication of
these base units. The entire kernel space can be described by the following grammar rules:
1. Any subexpression S can be replaced with S + B, where B is a base kernel.
2. Any subexpression S can be replaced with S × B, where B is a base kernel.
3. Any base kernel B may be replaced with another base kernel B′.
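One possible concrete representation of this grammar is sketched below (expressions as strings or nested tuples); this is an illustration, not the representation used in our implementation.

```python
BASE = ['SE', 'RQ', 'LIN', 'PER']  # base kernels for 1d data

def neighbors(expr):
    """All kernels reachable from `expr` by one application of rules 1-3."""
    out = []
    for B in BASE:
        out.append(('+', expr, B))   # rule 1: S -> S + B (at this subexpression)
        out.append(('*', expr, B))   # rule 2: S -> S * B
    if isinstance(expr, str):        # rule 3: swap a base kernel for another
        out.extend(B for B in BASE if B != expr)
    else:                            # recurse so rules apply to every subexpression
        op, left, right = expr
        out.extend((op, l, right) for l in neighbors(left))
        out.extend((op, left, r) for r in neighbors(right))
    return out
```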
4.2 Creating hyperparameter priors
The base kernels we will use are well understood, as are their hyperparameters, which have simple
interpretations that can be thematically grouped together. We take advantage of the Bayesian
framework to encode prior knowledge over hyperparameters, i.e., p(θ | M). Conveniently, these
priors can also potentially mitigate numerical problems during the training of the GPs. Here we derive
a consistent method to construct such priors for arbitrary kernels and datasets in regression problems.
We first standardize the dataset, i.e., we subtract the mean and divide by the standard deviation of
both the predictive features {xi } and the outputs y. This gives each dataset a consistent scale. Now
we can reason about what real-world datasets usually look like in this scale. For example, we do
not typically expect to see datasets spanning 10 000 length scales. Here we encode what we judge
to be reasonable priors for groups of thematically related hyperparameters for most datasets. These
include three types of hyperparameters common to virtually any problem: length scales ℓ (including,
for example, the period parameter of a periodic covariance), signal variance σ, and observation noise
σ_n. We also consider separately three other parameters specific to particular covariances we use here:
the α parameter of the rational quadratic covariance [15, (4.19)], the "length scale" of the periodic
covariance ℓ_p [15, ℓ in (4.31)], and the offset θ_0 in the linear covariance. We define the following:
    p(log ℓ) = N(0.1, 0.7²)     p(log σ) = N(0.4, 0.7²)     p(log σ_n) = N(0.1, 1²)
    p(log α) = N(0.05, 0.7²)    p(log ℓ_p) = N(2, 0.7²)     p(θ_0) = N(0, 2²)
Given these, each model was given an independent prior over each of its hyperparameters, using the
appropriate selection from the above for each.
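Written out as code, the priors above amount to a small lookup table; the group names below are our own labels.

```python
import numpy as np

# (mean, std) of a normal prior on each (log-)hyperparameter group of Sec. 4.2.
HYPER_PRIORS = {
    'log_length_scale': (0.1, 0.7),
    'log_signal':       (0.4, 0.7),
    'log_noise':        (0.1, 1.0),
    'log_rq_alpha':     (0.05, 0.7),
    'log_per_length':   (2.0, 0.7),
    'offset':           (0.0, 2.0),   # prior on the raw linear-kernel offset
}

def sample_prior(groups, rng=None):
    """One draw of the hyperparameter vector for a model with the given groups."""
    rng = np.random.default_rng() if rng is None else rng
    return np.array([rng.normal(*HYPER_PRIORS[g]) for g in groups])
```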
4.3 Approximating the model evidence
The model evidence p(y | X, M) is in general intractable for GPs [17, 15]. We therefore use a
Laplace approximation to compute the model evidence approximately. This approximation works by
making a second-order Taylor expansion of log p(θ | D, M) around its mode θ̂, which approximates the
model evidence as follows:

    log p(y | X, M) ≈ log p(y | X, θ̂, M) + log p(θ̂ | M) − (1/2) log det Σ⁻¹ + (d/2) log 2π,    (5)

where d is the dimension of θ and Σ⁻¹ = −∇² log p(θ | D, M)|_{θ=θ̂} [18, 19]. We can view (5) as
rewarding model fit while penalizing model complexity. Note that the Bayesian information criterion
(BIC), commonly used for model selection and also used by Duvenaud et al. [1], can be seen as an
approximation to the Laplace approximation [20, 21].
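A direct transcription of (5), assuming the mode and the Hessian of the negative log posterior have already been computed by a separate optimizer:

```python
import numpy as np

def laplace_log_evidence(log_joint, theta_hat, hessian):
    """Laplace approximation (5) to log p(y | X, M).

    log_joint(theta): log p(y | X, theta, M) + log p(theta | M)
    theta_hat:        the posterior mode (e.g. found with L-BFGS restarts)
    hessian:          Sigma^{-1} = -grad^2 log p(theta | D, M) at theta_hat
    """
    d = len(theta_hat)
    sign, logdet = np.linalg.slogdet(hessian)
    assert sign > 0, "Hessian must be positive definite at the mode"
    return log_joint(theta_hat) - 0.5 * logdet + 0.5 * d * np.log(2 * np.pi)
```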
4.4 Creating a "kernel kernel"
In §4.1, §4.2, and §4.3, we focused on modeling a latent function f with a GP, creating an infinite
space of models M to explain f (along with associated hyperparameter priors), and approximating
the log model evidence function g(M; D). The evidence function g is the objective function we are
trying to optimize via Bayesian optimization. We described in §3 how this search progresses in the
general case, in terms of an arbitrary Gaussian process prior on g. Here we will provide specific
suggestions for the modeling of g in the case that the model family M comprises Gaussian process
priors on a latent function f, as discussed here and considered in our experiments.
Our prior belief about g is given by a GP prior p(g) = GP(g; μ_g, K_g), which is fully specified by
the mean μ_g and covariance function K_g. We define the former as a simple constant mean function
μ_g(M) = μ̄, where μ̄ is a hyperparameter to be learned through a regular GP training procedure
given a set of observations. The latter we construct as follows.
The basic idea in our construction is that we will consider the distribution of the observation
locations in our dataset D, X (the design matrix of the underlying problem). We note that selecting a
model class M induces a prior distribution over the latent function values at X, p(f | X, M):

    p(f | X, M) = ∫ p(f | X, M, θ) p(θ | M) dθ.

This prior distribution is an infinite mixture of multivariate Gaussian prior distributions, each
conditioned on specific hyperparameters θ. We consider these prior distributions as different
explanations of the latent function f, restricted to the observed locations, offered by the model M.
We will compare two models in M according to how different the explanations they offer for f are, a priori.
The Hellinger distance is a probability metric that we adopt as a basic measure of similarity between
two distributions. Although this quantity is defined between arbitrary probability distributions (and
thus could be used with non-GP model spaces), we focus on the multivariate normal case. Suppose
that M, M′ ∈ M are two models that we wish to compare, in the context of explaining a fixed dataset
D. For now, suppose that we have conditioned each of these models on arbitrary hyperparameters
(that is, we select a particular prior for f from each of these two families), giving M_θ and M′_{θ′},
with θ ∈ Θ_M and θ′ ∈ Θ_{M′}. Now, we define the two distributions

    P = p(f | X, M, θ) = N(f; μ_P, Σ_P),    Q = p(f | X, M′, θ′) = N(f; μ_Q, Σ_Q).
The squared Hellinger distance between P and Q is

    d_H²(P, Q) = 1 − ( |Σ_P|^{1/4} |Σ_Q|^{1/4} / |(Σ_P + Σ_Q)/2|^{1/2} )
                 · exp( −(1/8) (μ_P − μ_Q)ᵀ ((Σ_P + Σ_Q)/2)⁻¹ (μ_P − μ_Q) ).    (6)
The Hellinger distance will be small when P and Q are highly overlapping, and thus M_θ and M′_{θ′}
provide similar explanations for this dataset. The distance will be larger, conversely, when M_θ and
M′_{θ′} provide divergent explanations. Critically, we note that this distance depends on the dataset
under consideration in addition to the GP priors.
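Equation (6) translates directly into a few lines of NumPy; working with log-determinants keeps the computation stable when the covariances are large:

```python
import numpy as np

def hellinger2(mu_p, cov_p, mu_q, cov_q):
    """Squared Hellinger distance (6) between N(mu_p, cov_p) and N(mu_q, cov_q)."""
    avg = 0.5 * (cov_p + cov_q)
    _, logdet_p = np.linalg.slogdet(cov_p)
    _, logdet_q = np.linalg.slogdet(cov_q)
    _, logdet_a = np.linalg.slogdet(avg)
    diff = mu_p - mu_q
    quad = diff @ np.linalg.solve(avg, diff)
    # log of the Bhattacharyya coefficient, then back to the distance
    log_bc = 0.25 * logdet_p + 0.25 * logdet_q - 0.5 * logdet_a - 0.125 * quad
    return 1.0 - np.exp(log_bc)
```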
Observe that the distance above is not sufficient to compare the similarity of two models M, M′
due to the fixing of hyperparameters above. To properly account for the different hyperparameters
of different models, and the priors associated with them, we define the expected squared Hellinger
distance of two models M, M′ ∈ M as

    d̄_H²(M, M′; X) = E[ d_H²(M_θ, M′_{θ′}) ]
                    = ∬ d_H²(M_θ, M′_{θ′}; X) p(θ | M) p(θ′ | M′) dθ dθ′,    (7)

where the distance is understood to be evaluated between the priors provided on f induced at X.
[Figure 1: four panels showing samples from the priors of the SE, PER, RQ, and SE + PER model classes, alongside the induced model covariance matrix.]

Figure 1: A demonstration of our model kernel K_g (8) based on expected Hellinger distance of
induced latent priors. Left: four simple model classes on a 1d domain, showing samples from the
prior p(f | M) ∝ p(f | θ, M) p(θ | M). Right: our Hellinger squared exponential covariance
evaluated for the grid domains on the left. Increasing intensity indicates stronger covariance. The
sets {SE, RQ} and {SE, PER, SE + PER} show strong mutual correlation.
Finally, we construct the Hellinger squared exponential covariance between models as

    K_g(M, M′; θ_g, X) = σ² exp( −(1/2) · d̄_H²(M, M′; X) / ℓ² ),    (8)

where θ_g = (σ, ℓ) specifies output and length-scale hyperparameters in this kernel/evidence space.
This covariance is illustrated in Figure 1 for a few simple kernels on a fictitious domain.
We make two notes before continuing. The first observation is that computing (6) scales cubically
with |X|, so it might appear that we might as well compute the evidence instead. This is misleading
for two reasons. First, the (approximate) computation of a given model's evidence via either a Laplace
approximation or the BIC requires optimizing its hyperparameters. Especially for complex models
this can require hundreds to thousands of computations that each require cubic time. Further, as
a result of our investigations, we have concluded that in practice we may approximate (6) and (7)
by considering only a small subset of the observation locations X, and that this is usually sufficient to
capture the similarity between models in terms of explaining a given dataset. In our experiments, we
choose 20 points uniformly at random from those available in each dataset, fixed once for the entire
procedure and for all kernels under consideration in the search. We then used these points to compute
distances (6)-(8), significantly reducing the overall time to compute K_g.
Second, we note that the expectation in (7) is intractable. Here we approximate the expectation
via quasi-Monte Carlo, using a low-discrepancy sequence (a Sobol sequence) of the appropriate
dimension, and inverse transform sampling, to give consistent, representative samples from the
hyperparameter space of each model. Here we used 100 (θ, θ′) samples with good results.
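A sketch of this estimator and of the resulting covariance (8), reusing `hellinger2` from above. Here `prior_mvn_1` and `prior_mvn_2` are assumed callables that push standard-normal draws through each model's priors (inverse-transform sampling) and return the induced (mean, covariance) of p(f | X, M, θ) at the probe points; we rely on `scipy.stats.qmc` for the Sobol sequence.

```python
import numpy as np
from scipy.stats import norm, qmc

def expected_hellinger2(prior_mvn_1, dim_1, prior_mvn_2, dim_2, n=128, seed=0):
    """Quasi-Monte Carlo estimate of the expected squared Hellinger distance (7)."""
    u = qmc.Sobol(d=dim_1 + dim_2, seed=seed).random(n)  # n a power of two
    z = norm.ppf(np.clip(u, 1e-9, 1 - 1e-9))             # standard-normal draws
    total = 0.0
    for row in z:
        mu_p, cov_p = prior_mvn_1(row[:dim_1])
        mu_q, cov_q = prior_mvn_2(row[dim_1:])
        total += hellinger2(mu_p, cov_p, mu_q, cov_q)
    return total / n

def kernel_kernel(d2_bar, sigma, ell):
    """The Hellinger squared exponential covariance (8) between two models."""
    return sigma ** 2 * np.exp(-0.5 * d2_bar / ell ** 2)
```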
4.5 Active set of candidate models
Another challenge in exploring an infinite set of models is how to advance the search. In each
round, we only compute the acquisition function on a set of candidate models C. Here we discuss
our policy for creating and maintaining this set. From the kernel grammar (§4.1), we can define a
model graph where two models are connected if we can apply one rule to produce the other. We seek
to traverse this graph, balancing exploration (diversity) against exploitation (models likely to have
higher evidence). We begin each round with a set of already chosen candidates C. To encourage
exploitation, we add to C all neighbors of the best model seen thus far. To encourage exploration, we
perform random walks to create diverse models, which we also add to C; a sketch appears after this
paragraph. We start each random walk from the empty kernel and repeatedly apply a random number
of grammatical transformations. The number of such steps is sampled from a geometric distribution
with termination probability 1/3. We find that 15 random walks work well. To constrain the number
of candidates, we discard the models with the lowest EI values at the end of each round, keeping |C|
no larger than 600.
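This policy is summarized in the sketch below, reusing `BASE` and `neighbors` from the grammar sketch above; `ei_values` is an assumed callable returning the acquisition value of each candidate.

```python
import numpy as np

def random_walk_model(rng, p_stop=1/3):
    """One exploration candidate: a Geometric(1/3) number of random rule applications."""
    expr = BASE[rng.integers(len(BASE))]
    for _ in range(rng.geometric(p_stop)):
        nbrs = neighbors(expr)
        expr = nbrs[rng.integers(len(nbrs))]
    return expr

def refresh_candidates(C, best_model, ei_values, rng, n_walks=15, max_size=600):
    """Exploit around the incumbent, explore with random walks, keep the top EI."""
    C = list(C) + neighbors(best_model)
    C += [random_walk_model(rng) for _ in range(n_walks)]
    keep = np.argsort(ei_values(C))[::-1][:max_size]
    return [C[i] for i in keep]
```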
Table 1: Root mean square error for the model-evidence regression experiment (standard error in parentheses).

Dataset     Train %   Mean            k-NN (SP)       k-NN (d̄_H)      GP (d̄_H)
CONCRETE    20        0.109 (0.000)   0.200 (0.020)   0.233 (0.008)   0.107 (0.001)
CONCRETE    40        0.107 (0.000)   0.260 (0.025)   0.221 (0.007)   0.102 (0.001)
CONCRETE    60        0.107 (0.000)   0.266 (0.007)   0.215 (0.005)   0.097 (0.001)
CONCRETE    80        0.106 (0.000)   0.339 (0.015)   0.200 (0.003)   0.093 (0.002)
HOUSING     20        0.210 (0.001)   0.226 (0.002)   0.347 (0.004)   0.175 (0.002)
HOUSING     40        0.207 (0.001)   0.235 (0.004)   0.348 (0.004)   0.140 (0.002)
HOUSING     60        0.206 (0.000)   0.235 (0.004)   0.348 (0.004)   0.123 (0.002)
HOUSING     80        0.206 (0.000)   0.257 (0.004)   0.344 (0.004)   0.114 (0.002)
MAUNA LOA   20        0.543 (0.002)   0.736 (0.051)   0.685 (0.010)   0.513 (0.003)
MAUNA LOA   40        0.537 (0.001)   0.878 (0.062)   0.667 (0.005)   0.499 (0.003)
MAUNA LOA   60        0.535 (0.001)   1.051 (0.058)   0.686 (0.010)   0.487 (0.004)
MAUNA LOA   80        0.534 (0.001)   1.207 (0.048)   0.707 (0.005)   0.474 (0.004)

5 Experiments
Here we evaluate our proposed algorithm. We split our evaluation into two parts: first, we show that
our GP model for predicting a model's evidence is suitable; we then demonstrate that our model search
method quickly finds a good model for a range of regression datasets. The datasets we consider are
publicly available⁴ and were used in previous related work [1, 3]. AIRLINE, MAUNA LOA, METHANE,
and SOLAR are 1d time series, and CONCRETE and HOUSING have, respectively, 8 and 13 dimensions.
To facilitate comparison of evidence across datasets, we report log evidence divided by dataset size,
redefining

    g(M; D) = log p(y | X, M) / |D|.    (9)

We use the aforementioned base kernels {SE, RQ, LIN, PER} when the dataset is one-dimensional.
For multi-dimensional datasets, we consider the set {SE_i} ∪ {RQ_i}, where the subscript indicates that
the kernel is applied only to the ith dimension. This setup is the same as in [1].
5.1 Predicting a model's evidence
We first demonstrate that our proposed regression model in model space (i.e., the GP on g : M → R) is
sound. We set up a simple prediction task where we predict model evidence on a set of models given
training data. We construct a dataset Dg (4) of 1 000 models as follows. We initialize a set M with the
set of base kernels, which varies for each dataset (see above). Then, we select one model uniformly
at random from M and add its neighbors in the model grammar to M. We repeat this procedure until
|M| = 1 000 and computed g(M; D) for the entire set generated. We train several baselines on a
subset of Dg and test their ability to predict the evidence of the remaining models, as measured by
the root mean squared error (RMSE). To achieve reliable results we repeat this experiment ten times.
We considered a subset of the datasets (including both high-dimensional problems), because training
1 000 models demands considerable time. We compare with several alternatives:
1. Mean prediction. Predicts the mean evidence on the training models.
2. k-nearest neighbors. We perform k-NN regression with two distances: shortest-path
distance in the directed model graph described in §4.5 (SP), and the expected squared
Hellinger distance (7). Inverse distance was used as weights.
We select k for both k-NN algorithms through cross-validation, trying all values of k from 1 to 10.
We show the average RMSE along with standard error in Table 1. The GP with our Hellinger distance
model covariance universally achieves the lowest error. Both k-NN methods are outperformed by the
simple mean prediction. We note that in these experiments, many models perform similarly in terms
of evidence (usually, this is because many models are "bad" in the same way, e.g., explaining the
dataset away entirely as independent noise). We note, however, that the GP model is able to exploit
correlations in deviations from the mean, for example in "good pockets" of model space, to achieve
correlations in deviations from the mean, for example in ?good pockets? of model space, to achieve
4
https://archive.ics.uci.edu/ml/datasets.html
7
AIRLINE
METHANE
HOUSING
?0.6
?0.2
0.5
?0.8
0
CKS
BOMS
0
20
40
?1
?0.3
?1.2
?0.4
0
iteration
20
40
?1.4
0
iteration
SOLAR
20
40
iteration
MAUNA LOA
?0.2
CONCRETE
?0.8
2.5
?1
?0.3
2
?1.2
?0.4
1.5
0
20
iteration
40
0
20
iteration
40
?1.4
0
20
40
iteration
Figure 2: A plot of the best model evidence found (normalized by |D|, (9)) as a function of the
number of models evaluated, g(M? ; D), for six of the datasets considered (identical vertical axis
labels omitted for greater horizontal resolution).
better performance. We also note that both the k-NN and GP models have decreasing error with the
number of training models, suggesting our novel model distance is also useful in itself.
5.2 Model search
We also evaluate our method's ability to quickly find a suitable model to explain a given dataset. We
compare our approach with the greedy compositional kernel search (CKS) of [1]. Both algorithms
used the same kernel grammar (§4.1), hyperparameter priors (§4.2), and evidence approximation
(§4.3, (5)). We used L-BFGS to optimize model hyperparameters, using multiple restarts to avoid bad
local maxima; each restart begins from a sample from p(θ | M).
For BOMS, we always began our search by evaluating SE first. The active set of models C (§4.5)
was initialized with all models that are at most two edges distant from the base kernels. To avoid
unnecessary re-training over g, we optimized the hyperparameters of μ_g and K_g every 10 iterations.
This also allows us to perform rank-one updates for fast inference during the intervening iterations.
Results are depicted in Figure 2 for a budget of 50 evaluations of the model evidence. In four of
the six datasets we substantially outperform CKS. Note the vertical axis is in the log domain. The
overhead for computing the kernel Kg and performing the inference about g was approximately 10%
of the total running time. On MAUNA LOA our method is competitive since we find a model with
similar quality, but earlier. The results for METHANE, on the other hand, indicate that our search
initially focused on a suboptimal region of the graph, but we eventually do catch up.
6 Conclusion
We introduced a novel automated search for an appropriate kernel to explain a given dataset. Our
mechanism explores an infinite space of candidate kernels and quickly and effectively selects a
promising model. Focusing on the case where the models represent structural assumptions in GPs, we
introduced a novel "kernel kernel" to capture the similarity in prior explanations that two models
ascribe to a given dataset. We have empirically demonstrated that our choice of modeling the evidence
(or marginal likelihood) with a GP in model space is capable of predicting the evidence value of
unseen models with enough fidelity to effectively explore model space via Bayesian optimization.
Acknowledgments
This material is based upon work supported by the National Science Foundation (NSF) under award
number IIA-1355406. Additionally, GM acknowledges support from the Brazilian Federal Agency
for Support and Evaluation of Graduate Education (CAPES).
References
[1] D. Duvenaud, J. R. Lloyd, R. Grosse, J. B. Tenenbaum, and Z. Ghahramani. Structure Discovery in
Nonparametric Regression through Compositional Kernel Search. In International Conference on Machine
Learning (ICML), 2013.
[2] R. Grosse, R. Salakhutdinov, W. Freeman, and J. Tenenbaum. Exploiting compositionality to explore a
large space of model structures. In Conference on Uncertainty in Artificial Intelligence (UAI), 2012.
[3] F. R. Bach. Exploring large feature spaces with hierarchical multiple kernel learning. In Conference on
Neural Information Processing Systems (NIPS), 2008.
[4] M. Gonen and E. Alpaydin. Multiple kernel learning algorithms. Journal of Machine Learning Research,
12:2211-2268, 2011.
[5] M. Lázaro-Gredilla, J. Q. Candela, C. E. Rasmussen, and A. R. Figueiras-Vidal. Sparse Spectrum Gaussian
Process Regression. Journal of Machine Learning Research, 11:1865-1881, 2010.
[6] A. G. Wilson and R. P. Adams. Gaussian Process Kernels for Pattern Discovery and Extrapolation. In
International Conference on Machine Learning (ICML), 2013.
[7] A. Wilson, E. Gilboa, J. P. Cunningham, and A. Nehorai. Fast kernel learning for multidimensional pattern
extrapolation. In Conference on Neural Information Processing Systems (NIPS), 2014.
[8] A. G. Wilson, D. A. Knowles, and Z. Ghahramani. Gaussian process regression networks. In International
Conference on Machine Learning (ICML), 2012.
[9] G. E. Hinton and R. R. Salakhutdinov. Using Deep Belief Nets to Learn Covariance Kernels for Gaussian
Processes. In Conference on Neural Information Processing Systems (NIPS). 2008.
[10] A. C. Damianou and N. D. Lawrence. Deep Gaussian Processes. In International Conference on Artificial
Intelligence and Statistics (AISTATS), 2013.
[11] J. S. Bergstra, R. Bardenet, Y. Bengio, and B. Kégl. Algorithms for hyper-parameter optimization. In
Conference on Neural Information Processing Systems (NIPS). 2011.
[12] J. Snoek, H. Larochelle, and R. P. Adams. Practical bayesian optimization of machine learning algorithms.
In Conference on Neural Information Processing Systems, 2012.
[13] J. Gardner, G. Malkomes, R. Garnett, K. Q. Weinberger, D. Barbour, and J. P. Cunningham. Bayesian
active model selection with an application to automated audiometry. In Conference on Neural Information
Processing Systems (NIPS). 2015.
[14] E. Brochu, V. M. Cora, and N. De Freitas. A tutorial on Bayesian optimization of expensive cost
functions, with application to active user modeling and hierarchical reinforcement learning. arXiv preprint
arXiv:1012.2599, 2010.
[15] C. E. Rasmussen and C. K. I. Williams. Gaussian Processes for Machine Learning. MIT Press, 2006.
[16] D. R. Jones, M. Schonlau, and W. J. Welch. Efficient global optimization of expensive black-box functions.
Journal of Global Optimization, 13(4):455-492, 1998.
[17] D. J. C. MacKay. Introduction to Gaussian processes. In C. M. Bishop, editor, Neural Networks and
Machine Learning, pages 133-165. Springer, Berlin, 1998.
[18] A. E. Raftery. Approximate Bayes Factors and Accounting for Model Uncertainty in Generalised Linear
Models. Biometrika, 83(2):251-266, 1996.
[19] J. Kuha. AIC and BIC: Comparisons of Assumptions and Performance. Sociological Methods and
Research, 33(2):188-229, 2004.
[20] G. Schwarz. Estimating the Dimension of a Model. Annals of Statistics, 6(2):461-464, 1978.
[21] K. P. Murphy. Machine Learning: A Probabilistic Perspective. MIT Press, 2012.
6,043 | 6,467 | Generalization of ERM in Stochastic Convex
Optimization:
The Dimension Strikes Back*
Vitaly Feldman
IBM Research - Almaden
Abstract
In stochastic convex optimization the goal is to minimize a convex function
.
F (x) = Ef ?D [f (x)] over a convex set K ? Rd where D is some unknown
distribution and each f (?) in the support of D is convex over K. The optimization is commonly based on i.i.d. samples f 1 , f 2 , . . . , f n from D. A standard
approach to P
such problems is empirical risk minimization (ERM) that optimizes
.
FS (x) = n1 i?n f i (x). Here we consider the question of how many samples
are necessary for ERM to succeed and the closely related question of uniform
convergence of FS to F over K. We demonstrate that in the standard `p /`q setting
of Lipschitz-bounded functions over a K of bounded radius, ERM requires sample
size that scales linearly with the dimension d. This nearly matches standard upper
bounds and improves on ?(log d) dependence proved for `2 /`2 setting in [18]. In
stark contrast, these problems can be solved using dimension-independent number
of samples for `2 /`2 setting and log d dependence for `1 /`? setting using other
approaches.
We further show that our lower bound applies even if the functions in the support
of D are smooth and efficiently computable and even if an `1 regularization term is
added. Finally, we demonstrate that for a more general class of bounded-range (but
not Lipschitz-bounded) stochastic convex programs an infinite gap appears already
in dimension 2.
1 Introduction
Numerous central problems in machine learning, statistics and operations research are special cases of
stochastic optimization from i.i.d. data samples. In this problem the goal is to optimize the value of the
expected objective function F(x) = E_{f∼D}[f(x)] over some set K given i.i.d. samples f¹, f², ..., fⁿ
of f. For example, in supervised learning the set K consists of hypothesis functions from Z to Y
and each sample is an example described by a pair (z, y) ∈ (Z, Y). For some fixed loss function
L : Y × Y → R, an example (z, y) defines a function from K to R given by f_{(z,y)}(h) = L(h(z), y).
The goal is to find a hypothesis h that (approximately) minimizes the expected loss relative to some
distribution P over examples: E_{(z,y)∼P}[L(h(z), y)] = E_{(z,y)∼P}[f_{(z,y)}(h)].
Here we are interested in stochastic convex optimization (SCO) problems in which K is some convex
subset of R^d and each function in the support of D is convex over K. The importance of this
setting stems from the fact that such problems can be solved efficiently via a large variety of known
techniques. Therefore in many applications even if the original optimization problem is not convex, it
is replaced by a convex relaxation.
A classic and widely-used approach to solving stochastic optimization problems is empirical risk
minimization (ERM) also referred to as stochastic average approximation (SAA) in the optimization
* See [9] for the full version of this work.
30th Conference on Neural Information Processing Systems (NIPS 2016), Barcelona, Spain.
literature. In this approach, given a set of samples S = (f¹, f², ..., fⁿ), the empirical objective
function F_S(x) = (1/n) Σ_{i≤n} f^i(x) is optimized (sometimes with an additional regularization term
such as λ‖x‖² for some λ > 0). The question we address here is the number of samples required
for this approach to work distribution-independently. More specifically, for some fixed convex
body K and fixed set of convex functions F over K, what is the smallest number of samples
n such that for every probability distribution D supported on F, any algorithm that minimizes
F_S given n i.i.d. samples from D will produce an ε-optimal solution x̂ to the problem (namely,
F(x̂) ≤ min_{x∈K} F(x) + ε) with probability at least 1 − δ? We will refer to this number as the sample
complexity of ERM for ε-optimizing F over K (we will fix δ = 1/2 for now).
The sample complexity of ERM for ε-optimizing F over K is lower bounded by the sample complexity
of ε-optimizing F over K, that is, the number of samples that is necessary to find an ε-optimal
solution by any algorithm. On the other hand, it is upper bounded by the number of samples that
ensures uniform convergence of F_S to F. Namely, if with probability ≥ 1 − δ, for all x ∈ K,
|F_S(x) − F(x)| ≤ ε/2, then, clearly, any algorithm based on ERM will succeed. As a result, ERM
and uniform convergence are the primary tool for analysis of the sample complexity of learning
problems and are the key subject of study in statistical learning theory. Fundamental results in VC
theory imply that in some settings, such as binary classification and least-squares regression, uniform
convergence is also a necessary condition for learnability (e.g. [23, 17]) and therefore the three
measures of sample complexity mentioned above nearly coincide.
In the context of stochastic convex optimization the study of sample complexity of ERM and
uniform convergence was initiated in a groundbreaking work of Shalev-Shwartz, Shamir, Srebro and
Sridharan [18]. They demonstrated that the relationships between these notions of sample complexity
are substantially more delicate even in the most well-studied settings of SCO. Specifically, let K
be a unit `2 ball and F be the set of all convex sub-differentiable functions with Lipschitz constant
relative to `2 bounded by 1 or, equivalently, k?f (x)k2 ? 1 for all x ? K. Then, known algorithm
?
for SCO imply that sample complexity of this problem is O(1/2 ) and often expressed as 1/ n
2
rate of convergence (e.g. [14, 17]). On the other hand, Shalev-Shwartz et al.[18] show that the
sample complexity of ERM for solving this problem with = 1/2 is ?(log d). The only known
2
?
upper bound for sample complexity of ERM is O(d/
) and relies only on the uniform convergence
of Lipschitz-bounded functions [21, 18].
As can be seen from this discussion, the work of Shalev-Shwartz et al. [18] still leaves a major gap
between known bounds on the sample complexity of ERM (and also uniform convergence) for this basic
Lipschitz-bounded ℓ₂/ℓ₂ setup. Another natural question is whether the gap is present in the popular
ℓ₁/ℓ∞ setup. In this setup K is a unit ℓ₁ ball (or in some cases a simplex) and ‖∇f(x)‖∞ ≤ 1 for all
x ∈ K. The sample complexity of SCO in this setup is Θ(log d/ε²) (e.g. [14, 17]) and therefore, even
an appropriately modified lower bound in [18] does not imply any gap. More generally, the choice
of norm can have a major impact on the relationship between these sample complexities and hence
needs to be treated carefully. For example, for (the reversed) ℓ∞/ℓ₁ setting the sample complexity
of the problem is Θ(d/ε²) (e.g. [10]) and nearly coincides with the number of samples sufficient for
uniform convergence.
1.1 Overview of Results
In this work we substantially strengthen the lower bound in [18], proving that a linear dependence on
the dimension d is necessary for ERM (and, consequently, uniform convergence). We then extend
the lower bound to all ℓ_p/ℓ_q setups and examine several related questions. Finally, we examine a
more general setting of bounded-range SCO (that is, |f(x)| ≤ 1 for all x ∈ K). While the sample
complexity of this setting is still low (for example, Õ(1/ε²) when K is an ℓ₂ ball) and efficient
algorithms are known, we show that ERM might require an infinite number of samples already for
d = 2.
Our work implies that in SCO, even optimization algorithms that exactly minimize the empirical
objective function can produce solutions with generalization error that is much larger than the generalization error of solutions obtained via some standard approaches. Another, somewhat counterintuitive,
conclusion from our lower bounds is that, from the point of view of generalization of ERM and
uniform convergence, convexity does not reduce the sample complexity in the worst case.
² The dependence on d is not stated explicitly but follows immediately from their analysis.
Basic construction: Our basic construction is fairly simple and its analysis is inspired by the
technique in [18]. It is based on functions of the form max{1/2, max_{v∈V} ⟨v, x⟩}. Note that the
maximum operator preserves both convexity and the Lipschitz bound (relative to any norm). See
Figure 1 for an illustration of such a function for d = 2.

Figure 1: Basic construction for d = 2.

The distribution over the sets V that define such functions is uniform over all subsets of some set
of vectors W of size 2^{d/6} such that for any two distinct u, v ∈ W, ⟨u, v⟩ ≤ 1/2. Equivalently, each
element of W is included in V with probability 1/2 independently of other elements in W. This
implies that if the number of samples is less than d/6 then, with probability > 1/2, at least one of
the vectors in W (say w) will not be observed in any of the samples. This implies that F_S can be
minimized while maximizing ⟨w, x⟩ (the maximum over the unit ℓ₂ ball is attained at w). Note that a
function randomly chosen from our distribution includes the term ⟨w, x⟩ in the maximum operator
with probability 1/2. Therefore the value of the expected function F at w is 3/4, whereas the minimum
of F is 1/2. In particular, there exists an ERM algorithm with generalization error of at least 1/4; a
numerical sketch appears below. The details of the construction appear in Sec. 3.1 and Thm. 3.3 gives
the formal statement of the lower bound. We also show that, by scaling the construction appropriately,
we can obtain the same lower bound for any ℓ_p/ℓ_q setup with 1/p + 1/q = 1 (see Thm. 3.4).
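The gap is easy to observe numerically. The sketch below uses much smaller parameters than the analysis (|W| = 2¹⁰ rather than 2^{d/6}, and n = 8 samples) so that an unobserved direction exists with high probability; it illustrates the argument and is not the formal proof.

```python
import numpy as np

rng = np.random.default_rng(0)
d, m, n = 600, 1024, 8                      # dimension, |W|, number of samples
W = rng.choice([-1.0, 1.0], size=(m, d))
G = W @ W.T - d * np.eye(m)
assert G.max() <= d / 2                     # pairwise <u, v> <= d/2 holds w.h.p. here

def f(V, x):                                # f_V(x) = max{1/2, max_{w in V} <w/sqrt(d), x>}
    return max(0.5, float((W[V] @ x).max()) / np.sqrt(d)) if len(V) else 0.5

samples = [np.flatnonzero(rng.random(m) < 0.5) for _ in range(n)]   # i.i.d. V ~ D
unseen = set(range(m)) - set(np.concatenate(samples).tolist())
assert unseen, "every direction was observed; rerun (probability ~ e^{-4})"
w = W[unseen.pop()] / np.sqrt(d)            # a unit direction no sample has seen
print(np.mean([f(V, w) for V in samples]))  # empirical value F_S(w): exactly 0.5
print(0.5 * 1.0 + 0.5 * 0.5)                # true value F(w) = 3/4, so the gap is 1/4
```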
Low complexity construction: The basic construction relies on functions that require 2^{d/6} bits to
describe and exponential time to compute. Most applications of SCO use efficiently computable
functions and therefore it is natural to ask whether the lower bound still holds for such functions.
To answer this question we describe a construction based on a set of functions where each function
requires just log d bits to describe (there are at most d/2 functions in the support of the distribution)
and each function can be computed in O(d) time. To achieve this we will use a W that consists of
(scaled) codewords of an asymptotically good and efficiently computable binary error-correcting
code [12, 22]. The functions are defined in a similar way but the additional structure of the code
allows us to use at most d/2 subsets of W to define the functions. Further details of the construction
appear in Section 4.
Smoothness: The use of the maximum operator results in functions that are highly non-smooth (that
is, their gradient is not Lipschitz-bounded), whereas the construction in [18] uses smooth functions.
Smoothness plays a crucial role in many algorithms for convex optimization (see [5] for examples).
It reduces the sample complexity of SCO in the ℓ₂/ℓ₂ setup to O(1/ε) when the smoothness parameter
is a constant (e.g. [14, 17]). Therefore it is natural to ask whether our strong lower bound holds
for smooth functions as well. We describe a modification of our construction that proves a similar
lower bound in the smooth case (with generalization error of 1/128). The main idea is to replace
each linear function ⟨v, x⟩ with some smooth function ν(⟨v, x⟩) guaranteeing that for different vectors
v¹, v² ∈ W and every x ∈ K, only one of ν(⟨v¹, x⟩) and ν(⟨v², x⟩) can be non-zero. This allows us to
easily control the smoothness of max_{v∈V} ν(⟨v, x⟩). See Figure 2 for an illustration of a function on
which the construction is based (for d = 2). The details of this construction appear in Sec. 3.2 and
the formal statement in Thm. 3.6.
Figure 2: Construction using 1-smooth functions for d = 2.
ℓ₁-regularization: Another important contribution in [18] is the demonstration of the important role
that strong convexity plays for generalization in SCO: minimization of F_S(x) + λR(x) ensures that
ERM will have low generalization error whenever R(x) is strongly convex (for a sufficiently large
λ). This result is based on the proof that ERM of a strongly convex Lipschitz function is uniform
replace-one stable and the connection between such stability and generalization shown in [4] (see
also [19] for a detailed treatment of the relationship between generalization and stability). It is
natural to ask whether other approaches to regularization will ensure generalization. We demonstrate
that for the commonly used ℓ₁ regularization the answer is negative. We prove this using a simple
modification of our lower bound construction: we shift the functions to the positive orthant where
the regularization term λ‖x‖₁ is just a linear function. We then subtract this linear function from
each function in our construction, thereby balancing the regularization (while maintaining convexity
and Lipschitz-boundedness). The details of this construction appear in Sec. 3.3 (see Thm. 3.7).
Dependence on accuracy: For simplicity and convenience we have ignored the dependence on the
accuracy ε, Lipschitz bound L and radius R of K in our lower bounds. It is easy to see that this more
general setting can be reduced to the case we consider here (Lipschitz bound and radius equal to
1) with accuracy parameter ε′ = ε/(LR). We generalize our lower bound to this setting and prove
that Ω(d/ε′²) samples are necessary for uniform convergence and Ω(d/ε′) samples are necessary
for generalization of ERM. Note that the upper bound on the sample complexity of these settings is
Õ(d/ε′²) and therefore the dependence on ε′ in our lower bound does not match the upper bound for
ERM. Resolving this gap or even proving any ω(d/ε′ + 1/ε′²) lower bound is an interesting open
problem. Additional details can be found in the full version.
Bounded-range SCO: Finally, we consider a more general class of bounded-range convex functions.
Note that the Lipschitz bound of 1 and the bound of 1 on the radius of K imply a bound of 1 on the
range (up to a constant shift which does not affect the optimization problem). While this setting is not
as well-studied, efficient algorithms for it are known. For example, the online algorithm in a recent
work of Rakhlin and Sridharan [16], together with standard online-to-batch conversion arguments
[6], implies that the sample complexity of this problem is Õ(1/ε²) for any K that is an ℓ₂ ball (of any
radius). For general convex bodies K, the problem can be solved via random walk-based approaches
[3, 10] or an adaptation of the center-of-gravity method given in [10]. Here we show that for this
setting ERM might completely fail already for K being the unit 2-dimensional ball. The construction
is based on ideas similar to those we used in the smooth case and is formally described in the full
version.
2 Preliminaries
For an integer n ≥ 1, let [n] = {1, ..., n}. Random variables are denoted by bold letters, e.g., f.
Given p ∈ [1, ∞], we denote the ball of radius R > 0 in the ℓ_p norm by B_p^d(R), and the unit ball by B_p^d.
For a convex body (i.e., a compact convex set with nonempty interior) K ⊆ R^d, we consider problems
of the form

    min_K(F_D) = min_{x∈K} F_D(x) = E_{f∼D}[f(x)],

where f is a random variable defined over some set of convex, sub-differentiable functions F on K
and distributed according to some unknown probability distribution D. We denote F* = min_K(F_D).
For an approximation parameter ε > 0, the goal is to find x ∈ K such that F_D(x) ≤ F* + ε, and we
call any such x an ε-optimal solution. For an n-tuple of functions S = (f¹, ..., fⁿ) we denote
F_S = (1/n) Σ_{i∈[n]} f^i.
We say that a point x
? is an empirical risk minimum for an n-tuple S of functions over K, if
FS (?
x) = minK (FS ). In some cases there are many points that minimize FS and in this case we refer
to a specific algorithm that selects one of the minimums of FS as an empirical risk minimizer. To
make this explicit we refer to the output of such a minimizer by x
?(S) .
Given x ? K, and a convex function f we denote by ?f (x) ? ?f (x) an arbitrary selection of
a subgradient. Let us make a brief reminder of some important classes of convex functions. Let
.
p ? [1, ?] and q = p? = 1/(1 ? 1/p). We say that a subdifferentiable convex function f : K ? R
is in the class
? F(K, B) of B-bounded-range functions if for all x ? K, |f (x)| ? B.
? Fp0 (K, L) of L-Lipschitz continuous functions w.r.t. `p , if for all x, y ? K, |f (x) ? f (y)| ?
Lkx ? ykp ;
? Fp1 (K, ?) of functions with ?-Lipschitz continuous gradient w.r.t. `p , if for all x, y ? K,
k?f (x) ? ?f (y)kq ? ?kx ? ykp .
We will omit p from the notation when p = 2. Omitted proofs can be found in the full version [9].
3 Lower Bounds for Lipschitz-Bounded SCO
In this section we present our main lower bounds for SCO of Lipschitz-bounded convex functions. For comparison purposes we start by formally stating some known bounds on the sample complexity of solving such problems. The following uniform convergence bounds can be easily derived from the standard covering number argument (e.g. [21, 18]).
Theorem 3.1. For $p \in [1, \infty]$, let $K \subseteq B_p^d(R)$ and let $D$ be any distribution supported on functions $L$-Lipschitz on $K$ relative to $\ell_p$ (not necessarily convex). Then, for every $\epsilon, \delta > 0$ and $n \ge n_1 = O\!\left(\frac{d \cdot (LR)^2 \cdot \log(dLR/(\epsilon\delta))}{\epsilon^2}\right)$,
$$\Pr_{S \sim D^n}\left[\exists x \in K,\ |F_D(x) - F_S(x)| \ge \epsilon\right] \le \delta.$$
The following upper bounds on the sample complexity of Lipschitz-bounded SCO can be obtained from several known algorithms [14, 18] (see [17] for a textbook exposition for $p = 2$).
Theorem 3.2. For $p \in [1, 2]$, let $K \subseteq B_p^d(R)$. Then, there is an algorithm $A_p$ that, given $\epsilon, \delta > 0$ and $n = n_p(d, R, L, \epsilon, \delta)$ i.i.d. samples from any distribution $D$ supported on $\mathcal{F}^0_p(K, L)$, outputs an $\epsilon$-optimal solution to $F_D$ over $K$ with probability $\ge 1 - \delta$. For $p \in (1, 2]$, $n_p = O((LR/\epsilon)^2 \cdot \log(1/\delta))$ and for $p = 1$, $n_p = O((LR/\epsilon)^2 \cdot \log d \cdot \log(1/\delta))$.
Stronger results are known under additional assumptions on smoothness and/or strong convexity (e.g. [14, 15, 20, 1]).
3.1 Non-smooth construction
We will start with a simpler lower bound for non-smooth functions. For simplicity, we will also restrict $R = L = 1$. Lower bounds for the general setting can be easily obtained from this case by scaling the domain and desired accuracy.
We will need a set of vectors $W \subseteq \{-1, 1\}^d$ with the following property: for any distinct $w_1, w_2 \in W$, $\langle w_1, w_2 \rangle \le d/2$. The Chernoff bound together with a standard packing argument imply that there exists a set $W$ with this property of size $\ge e^{d/8} \ge 2^{d/6}$.
For any subset $V$ of $W$ we define a function
$$g_V(x) := \max\left\{\frac{1}{2},\ \max_{w \in V} \langle \bar{w}, x \rangle\right\},\qquad (1)$$
where $\bar{w} := w/\|w\| = w/\sqrt{d}$. See Figure 1 for an illustration. We first observe that $g_V$ is convex and 1-Lipschitz (relative to $\ell_2$). This immediately follows from $\langle \bar{w}, x \rangle$ being convex and 1-Lipschitz for every $w$ and $g_V$ being the maximum of convex and 1-Lipschitz functions.
Theorem 3.3. Let $K := B_2^d$ and define $H_2 = \{g_V \mid V \subseteq W\}$ for $g_V$ defined in eq. (1). Let $D$ be the uniform distribution over $H_2$. Then for $n \le d/6$ and every set of samples $S$ there exists an ERM $\bar{x}(S)$ such that
$$\Pr_{S \sim D^n}\left[F_D(\bar{x}(S)) - F^* \ge 1/4\right] > 1/2.$$
Proof. We start by observing that the uniform distribution over $H_2$ is equivalent to picking the function $g_V$ where $V$ is obtained by including every element of $W$ with probability 1/2 randomly and independently of all other elements. Further, by the properties of $W$, for every $w \in W$ and $V \subseteq W$, $g_V(\bar{w}) = 1$ if $w \in V$ and $g_V(\bar{w}) = 1/2$ otherwise. For $g_V$ chosen randomly with respect to $D$, we have that $w \in V$ with probability exactly 1/2. This implies that $F_D(\bar{w}) = 3/4$.
Let $S = (g_{V_1}, \dots, g_{V_n})$ be the random samples. Observe that $\min_K(F_S) = 1/2$ and $F^* = \min_K(F_D) = 1/2$ (the minimum is achieved at the origin $\bar{0}$). Now, if $\bigcup_{i \in [n]} V_i \ne W$ then let $\bar{x}(S) := \bar{w}$ for any $w \in W \setminus \bigcup_{i \in [n]} V_i$. Otherwise $\bar{x}(S)$ is defined to be the origin $\bar{0}$. Then by the property of $H_2$ mentioned above, we have that for all $i$, $g_{V_i}(\bar{x}(S)) = 1/2$ and hence $F_S(\bar{x}(S)) = 1/2$. This means that $\bar{x}(S)$ is a minimizer of $F_S$.
Combining these statements, we get that, if $\bigcup_{i \in [n]} V_i \ne W$ then there exists an ERM $\bar{x}(S)$ such that $F_S(\bar{x}(S)) = \min_K(F_S)$ and $F_D(\bar{x}(S)) - F^* = 1/4$. Therefore to prove the claim it suffices to show that for $n \le d/6$ we have that
$$\Pr_{S \sim D^n}\left[\bigcup_{i \in [n]} V_i \ne W\right] > \frac{1}{2}.$$
This easily follows from observing that for the uniform distribution over subsets of $W$, for every $w \in W$,
$$\Pr_{S \sim D^n}\left[w \in \bigcup_{i \in [n]} V_i\right] = 1 - 2^{-n}$$
and this event is independent from the inclusion of other elements in $\bigcup_{i \in [n]} V_i$. Therefore
$$\Pr_{S \sim D^n}\left[\bigcup_{i \in [n]} V_i = W\right] = \left(1 - 2^{-n}\right)^{|W|} \le e^{-2^{-n} \cdot 2^{d/6}} \le e^{-1} < \frac{1}{2}.$$
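To make the counting argument above concrete, the following sketch (our own illustration, not part of the original proof; the rejection sampling used to build $W$ is a hypothetical stand-in for the packing argument) simulates the construction for a small $d$ and exhibits an empirical risk minimizer whose population excess error is roughly 1/4:

```python
import numpy as np

rng = np.random.default_rng(0)
d, n = 60, 10  # n <= d/6, matching the regime of Theorem 3.3

# Stand-in for the packing argument: rejection-sample sign vectors whose
# pairwise inner products are at most d/2 (random pairs satisfy this w.h.p.)
W = []
while len(W) < 2 ** (d // 6):
    w = rng.choice([-1.0, 1.0], size=d)
    if all(w @ u <= d / 2 for u in W):
        W.append(w)
W = np.array(W)

def g(V, x):
    """g_V(x) = max{1/2, max_{w in V} <w_bar, x>} with w_bar = w / sqrt(d)."""
    if len(V) == 0:
        return 0.5
    return max(0.5, (W[list(V)] @ x).max() / np.sqrt(d))

# Draw a sample S: each V_i contains every element of W independently w.p. 1/2;
# retry until some w is uncovered (happens with probability > 1/2 when n <= d/6)
while True:
    S = [set(np.flatnonzero(rng.random(len(W)) < 0.5)) for _ in range(n)]
    uncovered = set(range(len(W))) - set().union(*S)
    if uncovered:
        break

x_erm = W[next(iter(uncovered))] / np.sqrt(d)    # the "bad" ERM of the proof
F_S = np.mean([g(V, x_erm) for V in S])          # = 1/2 = min_K F_S
F_D = np.mean([g(set(np.flatnonzero(rng.random(len(W)) < 0.5)), x_erm)
               for _ in range(2000)])            # Monte Carlo, approx 3/4
print(F_S, F_D)                                  # excess error approx 1/4
```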
Other $\ell_p$ norms: We now observe that exactly the same approach can be used to extend this lower bound to the $\ell_p/\ell_q$ setting. Specifically, for $p \in [1, \infty]$ and $q = p^*$ we define
$$g_{p,V}(x) := \max\left\{\frac{1}{2},\ \max_{w \in V} \frac{\langle w, x \rangle}{d^{1/q}}\right\}.$$
It is easy to see that for every $V \subseteq W$, $g_{p,V} \in \mathcal{F}^0_p(B_p^d, 1)$. We can now use the same argument as before with the appropriate normalization factor for points in $B_p^d$. Namely, instead of $\bar{w}$ for $w \in W$ we consider the values of the minimized functions at $w/d^{1/p} \in B_p^d$. This gives the following generalization of Thm. 3.3.
Theorem 3.4. For every $p \in [1, \infty]$ let $K := B_p^d$ and define $H_p = \{g_{p,V} \mid V \subseteq W\}$ and let $D$ be the uniform distribution over $H_p$. Then for $n \le d/6$ and every set of samples $S$ there exists an ERM $\bar{x}(S)$ such that
$$\Pr_{S \sim D^n}\left[F_D(\bar{x}(S)) - F^* \ge 1/4\right] > 1/2.$$
3.2 Smoothness does not help
We now extend the lower bound to smooth functions. We will for simplicity restrict our attention to $\ell_2$ but analogous modifications can be made for other $\ell_p$ norms. The functions $g_V$ that we used in the construction use two maximum operators, each of which introduces non-smoothness. To deal with the maximum with 1/2 we simply replace the function $\max\{1/2, \langle \bar{w}, x \rangle\}$ with a quadratically smoothed version (in the same way as hinge loss is sometimes replaced with modified Huber loss). To deal with the maximum over all $w \in V$, we show that it is possible to ensure that individual components do not "interact": that is, at every point $x$, the value, gradient and Hessian of at most one component function are non-zero (value, vector and matrix, respectively). This ensures that the maximum becomes addition and Lipschitz/smoothness constants can be upper-bounded easily.
Formally, we define
$$\nu(a) := \begin{cases} 0 & \text{if } a \le 0 \\ a^2 & \text{otherwise.} \end{cases}$$
Now, for $V \subseteq W$, we define
$$h_V(x) := \sum_{w \in V} \nu\left(\langle \bar{w}, x \rangle - 7/8\right).\qquad (2)$$
See Figure 2 for an illustration. We first prove that $h_V$ is 1/4-Lipschitz and 1-smooth.
Lemma 3.5. For every $V \subseteq W$ and $h_V$ defined in eq. (2) we have $h_V \in \mathcal{F}^0_2(B_2^d, 1/4) \cap \mathcal{F}^1_2(B_2^d, 1)$.
From here we can use the proof approach from Thm. 3.3 but with $h_V$ in place of $g_V$.
Theorem 3.6. Let $K := B_2^d$ and define $H = \{h_V \mid V \subseteq W\}$ for $h_V$ defined in eq. (2). Let $D$ be the uniform distribution over $H$. Then for $n \le d/6$ and every set of samples $S$ there exists an ERM $\bar{x}(S)$ such that
$$\Pr_{S \sim D^n}\left[F_D(\bar{x}(S)) - F^* \ge 1/128\right] > 1/2.$$
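As a numerical sanity check of the non-interaction property and the 1/4-Lipschitz claim in Lemma 3.5 (again our own illustration; the set $W$ is sampled at random rather than constructed by packing, so the inner-product property holds only with high probability), one can evaluate the gradient of $h_V$ along directions of codewords inside the unit ball:

```python
import numpy as np

rng = np.random.default_rng(1)
d = 100
W = rng.choice([-1.0, 1.0], size=(12, d))   # assume pairwise <w, w'> <= d/2
V = W[:8] / np.sqrt(d)                      # store the normalized w_bar's

def nu(a):
    # nu(a) = 0 for a <= 0 and a^2 otherwise: the one-sided quadratic of eq. (2)
    return np.maximum(a, 0.0) ** 2

def h(x):
    return nu(V @ x - 7 / 8).sum()

def grad_h(x):
    # chain rule through nu: sum_w 2*(<w_bar, x> - 7/8)_+ * w_bar
    return 2 * np.maximum(V @ x - 7 / 8, 0.0) @ V

for _ in range(5):
    w_bar = V[rng.integers(len(V))]
    for t in np.linspace(0.0, 1.0, 11):
        x = t * w_bar                        # points inside the unit ball
        active = np.sum(V @ x - 7 / 8 > 0)
        assert active <= 1                   # components do not interact
        assert np.linalg.norm(grad_h(x)) <= 0.25 + 1e-12   # 1/4-Lipschitz
```

The second assertion reflects that on the unit ball at most one summand is active and its gradient has norm at most $2(1 - 7/8) = 1/4$.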
3.3 $\ell_1$ regularization does not help
Next we show that the lower bound holds even with an additional $\ell_1$ regularization term $\lambda\|x\|_1$ for every positive $\lambda \le 1/\sqrt{d}$. (Note that if $\lambda > 1/\sqrt{d}$ then the resulting program is no longer 1-Lipschitz relative to $\ell_2$. Any constant $\lambda$ can be allowed for the $\ell_1/\ell_\infty$ setup.) To achieve this we shift the construction to the positive orthant (that is, $x$ such that $x_i \ge 0$ for all $i \in [d]$). In this orthant the subgradient of the regularization term is simply $\lambda\bar{1}$ where $\bar{1}$ is the all-1's vector. We can add a linear term to each function in our distribution that balances this term, thereby reducing the analysis to the non-regularized case. More formally, we define the following family of functions. For $V \subseteq W$,
$$h_V^\lambda(x) := h_V\!\left(x - \bar{1}/\sqrt{d}\right) - \lambda\langle \bar{1}, x \rangle.\qquad (3)$$
Note that over $B_2^d(2)$, $h_V^\lambda(x)$ is $L$-Lipschitz for $L \le 2(2 - 7/8) + \lambda\sqrt{d} \le 9/4 + 1$. We now state and prove this formally.
Theorem 3.7. Let $K := B_2^d(2)$ and, for a given $\lambda \in (0, 1/\sqrt{d}]$, define $H^\lambda = \{h_V^\lambda \mid V \subseteq W\}$ for $h_V^\lambda$ defined in eq. (3). Let $D$ be the uniform distribution over $H^\lambda$. Then for $n \le d/6$ and every set of samples $S$ there exists $\bar{x}(S)$ such that
- $F_S(\bar{x}(S)) = \min_{x \in K}(F_S(x) + \lambda\|x\|_1)$;
- $\Pr_{S \sim D^n}[F_D(\bar{x}(S)) - F^* \ge 1/128] > 1/2$.
4 Lower Bound for Low-Complexity Functions
We will now demonstrate that our lower bounds hold even if one restricts the attention to functions that can be computed efficiently (in time polynomial in $d$). For this purpose we will rely on known constructions of binary linear error-correcting codes. We describe the construction for the non-smooth $\ell_2/\ell_2$ setting but analogous versions of other constructions can be obtained in the same way.
We start by briefly providing the necessary background about binary codes. For two vectors $w_1, w_2 \in \{\pm 1\}^d$ let $\#_{\ne}(w_1, w_2)$ denote the Hamming distance between the two vectors. We say that a mapping $G: \{\pm 1\}^k \to \{\pm 1\}^d$ is a $[d, k, r, T]$ binary error-correcting code if $G$ has distance at least $2r + 1$, $G$ can be computed in time $T$ and there exists an algorithm that for every $w \in \{\pm 1\}^d$ such that for some $z \in \{\pm 1\}^k$, $\#_{\ne}(w, G(z)) \le r$, finds such $z$ in time $T$ (note that such $z$ is unique).
Given a $[d, k, r, T]$ code $G$, for every $j \in [k]$, we define a function
$$g_j(x) := \max\left\{1 - \frac{r}{2d},\ \max_{w \in W_j} \langle \bar{w}, x \rangle\right\},\qquad (4)$$
where $W_j := \{G(z) \mid z \in \{\pm 1\}^k,\ z_j = 1\}$. As before, we note that $g_j$ is convex and 1-Lipschitz (relative to $\ell_2$).
We can now use any existing constructions of efficient binary error-correcting codes to obtain a lower bound that uses only a small set of efficiently computable convex functions. Getting a lower bound that has asymptotically optimal dependence on $d$ requires that $k = \Omega(d)$ and $r = \Omega(d)$ (referred to as being asymptotically good). The existence of efficiently computable and asymptotically good binary error-correcting codes was first shown by Justesen [12]. More recent work of Spielman [22] shows existence of asymptotically good codes that can be encoded and decoded in $O(d)$ time. In particular, for some constant $\rho > 0$, there exists a $[d, d/2, \rho \cdot d, O(d)]$ binary error-correcting code. As a corollary we obtain the following lower bound.
Corollary 4.1. Let $G$ be an asymptotically-good $[d, d/2, \rho \cdot d, O(d)]$ error-correcting code for a constant $\rho > 0$. Let $K := B_2^d$ and define $H_G = \{g_j \mid j \in [d/2]\}$ for $g_j$ defined in eq. (4). Let $D$ be the uniform distribution over $H_G$. Then for every $x \in K$, $g_j(x)$ can be computed in time $O(d)$. Further, for $n \le d/4$ and every set of samples $S \in H_G^n$ there exists an ERM $\bar{x}(S)$ such that $F_D(\bar{x}(S)) - F^* \ge \rho/4$.
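For intuition, the sketch below (our own toy illustration) instantiates eq. (4) with a Hadamard code. Two caveats: the Hadamard code is efficiently computable and has distance $d/2$, but its dimension $k = \log d$ is far below the $k = \Omega(d)$ needed for Corollary 4.1; and we evaluate $g_j$ by brute force over all messages rather than through the decoder (a real instantiation would use the decoding algorithm to get the $O(d)$ evaluation time), so this only shows how the functions are assembled:

```python
import numpy as np
from itertools import product

k = 4
d = 2 ** k                       # Hadamard code: {-1,+1}^k -> {-1,+1}^d
cols = np.array(list(product([0, 1], repeat=k)))   # all k-bit index vectors

def encode(z):
    """Codeword of z: parities <m, a> over GF(2) for all a, mapped to +-1."""
    m = (1 - np.asarray(z)) // 2                   # map +-1 symbols to bits
    return 1.0 - 2.0 * ((cols @ m) % 2)

r = (d // 2 - 1) // 2            # the code's distance d/2 is >= 2r + 1

def g_j(x, j):
    """g_j(x) = max{1 - r/(2d), max_{w in W_j} <w_bar, x>}, W_j = {G(z): z_j = 1}."""
    best = max(float(encode(z) @ x) / np.sqrt(d)
               for z in product([-1, 1], repeat=k) if z[j] == 1)
    return max(1.0 - r / (2 * d), best)

z = (1, -1, 1, 1)
x = encode(z) / np.sqrt(d)       # a normalized codeword with z_0 = z_2 = 1
print(g_j(x, 0), g_j(x, 1))      # equals 1.0 exactly when z_j = 1
```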
5 Discussion
Our work points to substantial limitations of the classic approach to understanding and analysis
of generalization in the context of general SCO. Further, it implies that in order to understand
how well solutions produced by an optimization algorithm generalize, it is necessary to examine
the optimization algorithm itself. This is a challenging task that we still have relatively few tools
to address. Yet such understanding is also crucial for developing theory to guide the design of
optimization algorithms that are used in machine learning applications.
One way to bypass our lower bounds is to use additional structural assumptions. For example, for
generalized linear regression problems uniform convergence gives nearly optimal bounds on sample
complexity [13]. One natural question is whether there exist more general classes of functions that
capture most of the practically relevant SCO problems and enjoy dimension-independent (or, scaling
as log d) uniform convergence bounds.
An alternative approach is to bypass uniform convergence (and possibly also ERM) altogether.
Among a large number of techniques that have been developed for ensuring generalization, the most
general ones are based on notions of stability [4, 19]. However, known analyses based on stability
often do not provide the strongest known generalization guarantees (e.g. high probability bounds
require very strong assumptions). Another issue is that we lack general algorithmic tools for ensuring
stability of the output. Therefore many open problems remain and significant progress is required to
obtain a more comprehensive understanding of this approach. Some encouraging new developments
in this area are the use of notions of stability derived from differential privacy [7, 8, 2] and the use of
techniques for analysis of convergence of convex optimization algorithms for proving stability [11].
Acknowledgements
I am grateful to Ken Clarkson, Sasha Rakhlin and Thomas Steinke for discussions and insightful
comments related to this work.
References
[1] F. R. Bach and E. Moulines. Non-strongly-convex smooth stochastic approximation with convergence rate o(1/n). In NIPS, pages 773–781, 2013.
[2] R. Bassily, K. Nissim, A. D. Smith, T. Steinke, U. Stemmer, and J. Ullman. Algorithmic stability for adaptive data analysis. In STOC, pages 1046–1059, 2016.
[3] A. Belloni, T. Liang, H. Narayanan, and A. Rakhlin. Escaping the local minima via simulated annealing: Optimization of approximately convex functions. In COLT, pages 240–265, 2015.
[4] O. Bousquet and A. Elisseeff. Stability and generalization. JMLR, 2:499–526, 2002.
[5] S. Bubeck. Convex optimization: Algorithms and complexity. Foundations and Trends in Machine Learning, 8(3-4):231–357, 2015.
[6] N. Cesa-Bianchi, A. Conconi, and C. Gentile. On the generalization ability of on-line learning algorithms. IEEE Transactions on Information Theory, 50(9):2050–2057, 2004.
[7] C. Dwork, V. Feldman, M. Hardt, T. Pitassi, O. Reingold, and A. Roth. Preserving statistical validity in adaptive data analysis. CoRR, abs/1411.2664, 2014. Extended abstract in STOC 2015.
[8] C. Dwork, V. Feldman, M. Hardt, T. Pitassi, O. Reingold, and A. Roth. Generalization in adaptive data analysis and holdout reuse. CoRR, abs/1506, 2015. Extended abstract in NIPS 2015.
[9] V. Feldman. Generalization of ERM in stochastic convex optimization: The dimension strikes back. CoRR, abs/1608.04414, 2016. Extended abstract in NIPS 2016.
[10] V. Feldman, C. Guzman, and S. Vempala. Statistical query algorithms for mean vector estimation and stochastic convex optimization. CoRR, abs/1512.09170, 2015. Extended abstract in SODA 2017.
[11] M. Hardt, B. Recht, and Y. Singer. Train faster, generalize better: Stability of stochastic gradient descent. In ICML, pages 1225–1234, 2016.
[12] J. Justesen. Class of constructive asymptotically good algebraic codes. IEEE Trans. Inf. Theor., 18(5):652–656, 1972.
[13] S. Kakade, K. Sridharan, and A. Tewari. On the complexity of linear prediction: Risk bounds, margin bounds, and regularization. In NIPS, pages 793–800, 2008.
[14] A. Nemirovski, A. Juditsky, G. Lan, and A. Shapiro. Robust stochastic approximation approach to stochastic programming. SIAM J. Optim., 19(4):1574–1609, 2009.
[15] A. Rakhlin, O. Shamir, and K. Sridharan. Making gradient descent optimal for strongly convex stochastic optimization. In ICML, 2012.
[16] A. Rakhlin and K. Sridharan. Sequential probability assignment with binary alphabets and large classes of experts. CoRR, abs/1501.07340, 2015.
[17] S. Shalev-Shwartz and S. Ben-David. Understanding Machine Learning: From Theory to Algorithms. Cambridge University Press, 2014.
[18] S. Shalev-Shwartz, O. Shamir, N. Srebro, and K. Sridharan. Stochastic convex optimization. In COLT, 2009.
[19] S. Shalev-Shwartz, O. Shamir, N. Srebro, and K. Sridharan. Learnability, stability and uniform convergence. The Journal of Machine Learning Research, 11:2635–2670, 2010.
[20] O. Shamir and T. Zhang. Stochastic gradient descent for non-smooth optimization: Convergence results and optimal averaging schemes. In ICML, pages 71–79, 2013.
[21] A. Shapiro and A. Nemirovski. On complexity of stochastic programming problems. In V. Jeyakumar and A. M. Rubinov, editors, Continuous Optimization: Current Trends and Applications 144. Springer, 2005.
[22] D. Spielman. Linear-time encodable and decodable error-correcting codes. IEEE Transactions on Information Theory, 42(6):1723–1731, 1996.
[23] V. Vapnik. Statistical Learning Theory. Wiley-Interscience, New York, 1998.
One-vs-Each Approximation to Softmax for Scalable
Estimation of Probabilities
Michalis K. Titsias
Department of Informatics
Athens University of Economics and Business
mtitsias@aueb.gr
Abstract
The softmax representation of probabilities for categorical variables plays a prominent role in modern machine learning with numerous applications in areas such as
large scale classification, neural language modeling and recommendation systems.
However, softmax estimation is very expensive for large scale inference because
of the high cost associated with computing the normalizing constant. Here, we
introduce an efficient approximation to softmax probabilities which takes the form
of a rigorous lower bound on the exact probability. This bound is expressed as a
product over pairwise probabilities and it leads to scalable estimation based on
stochastic optimization. It allows us to perform doubly stochastic estimation by
subsampling both training instances and class labels. We show that the new bound
has interesting theoretical properties and we demonstrate its use in classification
problems.
1 Introduction
Based on the softmax representation, the probability of a variable $y$ to take the value $k \in \{1, \dots, K\}$, where $K$ is the number of categorical symbols or classes, is modeled by
$$p(y = k|x) = \frac{e^{f_k(x;w)}}{\sum_{m=1}^K e^{f_m(x;w)}},\qquad (1)$$
where each fk (x; w) is often referred to as the score function and it is a real-valued function indexed
by an input vector x and parameterized by w. The score function measures the compatibility of input
x with symbol y = k so that the higher the score is the more compatible x becomes with y = k. The
most common application of softmax is multiclass classification where x is an observed input vector
and fk (x; w) is often chosen to be a linear function or more generally a non-linear function such as a
neural network [3, 8]. Several other applications of softmax arise, for instance, in neural language
modeling for learning word vector embeddings [15, 14, 18] and also in collaborating filtering for
representing probabilities of (user, item) pairs [17]. In such applications the number of symbols
K could often be very large, e.g. of the order of tens of thousands or millions, which makes the
computation of softmax probabilities very expensive due to the large sum in the normalizing constant
of Eq. (1). Thus, exact training procedures based on maximum likelihood or Bayesian approaches
are computationally prohibitive and approximations are needed. While some rigorous bound-based
approximations to the softmax exists [5], they are not so accurate or scalable and therefore it would
be highly desirable to develop accurate and computationally efficient approximations.
In this paper we introduce a new efficient approximation to softmax probabilities which takes the
form of a lower bound on the probability of Eq. (1). This bound draws an interesting connection
between the exact softmax probability and all its one-vs-each pairwise probabilities, and it has several
desirable properties. Firstly, for the non-parametric estimation case it leads to an approximation of the
likelihood that shares the same global optimum with exact maximum likelihood, and thus estimation
based on the approximation is a perfect surrogate for the initial estimation problem. Secondly, the
bound allows for scalable learning through stochastic optimization where data subsampling can be
combined with subsampling categorical symbols. Thirdly, whenever the initial exact softmax cost
function is convex the bound remains also convex.
Regarding related work, there exist several other methods that try to deal with the high cost of softmax
such as methods that attempt to perform the exact computations [9, 19], methods that change the
model based on hierarchical or stick-breaking constructions [16, 13] and sampling-based methods
[1, 14, 7, 11]. Our method is a lower bound based approach that follows the variational inference
framework. Other rigorous variational lower bounds on the softmax have been used before [4, 5],
however they are not easily scalable since they require optimizing data-specific variational parameters.
In contrast, the bound we introduce in this paper does not contain any variational parameter, which
greatly facilitates stochastic minibatch training. At the same time it can be much tighter than previous
bounds [5] as we will demonstrate empirically in several classification datasets.
2 One-vs-each lower bound on the softmax
Here, we derive the new bound on the softmax (Section 2.1) and we prove its optimality property when
performing approximate maximum likelihood estimation (Section 2.2). Such a property holds for the
non-parametric case, where we estimate probabilities of the form p(y = k), without conditioning
on some x, so that the score functions fk (x; w) reduce to unrestricted parameters fk ; see Eq. (2)
below. Finally, we also analyze the related bound derived by Bouchard [5] and we compare it with
our approach (Section 2.3).
2.1 Derivation of the bound
Consider a discrete random variable $y \in \{1, \dots, K\}$ that takes the value $k$ with probability
$$p(y = k) = \mathrm{Softmax}_k(f_1, \dots, f_K) = \frac{e^{f_k}}{\sum_{m=1}^K e^{f_m}},\qquad (2)$$
where each $f_k$ is a free real-valued scalar parameter. We wish to express a lower bound on $p(y = k)$ and the key step of our derivation is to re-write $p(y = k)$ as
$$p(y = k) = \frac{1}{1 + \sum_{m \ne k} e^{-(f_k - f_m)}}.\qquad (3)$$
Then, by exploiting the fact that for any non-negative numbers $\alpha_1$ and $\alpha_2$ it holds $1 + \alpha_1 + \alpha_2 \le 1 + \alpha_1 + \alpha_2 + \alpha_1\alpha_2 = (1 + \alpha_1)(1 + \alpha_2)$, and more generally it holds $(1 + \sum_i \alpha_i) \le \prod_i (1 + \alpha_i)$ where each $\alpha_i \ge 0$, we obtain the following lower bound on the above probability,
$$p(y = k) \ge \prod_{m \ne k} \frac{1}{1 + e^{-(f_k - f_m)}} = \prod_{m \ne k} \frac{e^{f_k}}{e^{f_k} + e^{f_m}} = \prod_{m \ne k} \sigma(f_k - f_m),\qquad (4)$$
where $\sigma(\cdot)$ denotes the sigmoid function. Clearly, the terms in the product are pairwise probabilities, each corresponding to the event $y = k$ conditional on the union of pairs of events, i.e. $y \in \{k, m\}$ where $m$ is one of the remaining values. We will refer to this bound as the one-vs-each bound on the softmax probability, since it involves $K - 1$ comparisons of a specific event $y = k$ versus each of the $K - 1$ remaining events. Furthermore, the above result can be stated more generally to define bounds on arbitrary probabilities, as the following statement shows.
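The bound in Eq. (4) is easy to verify numerically; the following check (our own illustration, not from the paper) confirms it on random scores:

```python
import numpy as np

rng = np.random.default_rng(0)
f = rng.normal(size=10)                       # arbitrary scores f_1..f_K
softmax = np.exp(f) / np.exp(f).sum()

def sigmoid(t):
    return 1.0 / (1.0 + np.exp(-t))

for k in range(len(f)):
    ove = np.prod([sigmoid(f[k] - f[m]) for m in range(len(f)) if m != k])
    # Eq. (4): the product of pairwise probabilities lower-bounds the softmax
    assert ove <= softmax[k]
print("one-vs-each bound verified for all k")
```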
Proposition 1. Assume a probability model with state space $\Omega$ and probability measure $P(\cdot)$. For any event $A \subset \Omega$ and an associated countable set of disjoint events $\{B_i\}$ such that $\cup_i B_i = \Omega \setminus A$, it holds
$$P(A) \ge \prod_i P(A \mid A \cup B_i).\qquad (5)$$
Proof. Given that $P(A) = \frac{P(A)}{P(\Omega)} = \frac{P(A)}{P(A) + \sum_i P(B_i)}$, the result follows by applying the inequality $(1 + \sum_i \alpha_i) \le \prod_i (1 + \alpha_i)$ exactly as done above for the softmax parameterization.
Remark. If the set $\{B_i\}$ consists of a single event $B$ then by definition $B = \Omega \setminus A$ and the bound is exact since in such case $P(A \mid A \cup B) = P(A)$.
Furthermore, based on the above construction we can express a full class of hierarchically ordered bounds. For instance, if we merge two events $B_i$ and $B_j$ into a single one, then the term $P(A \mid A \cup B_i) P(A \mid A \cup B_j)$ in the initial bound is replaced with $P(A \mid A \cup B_i \cup B_j)$ and the associated new bound, obtained after this merge, can only become tighter. To see a more specific example in the softmax probabilistic model, assume a small subset of categorical symbols $C_k$, that does not include $k$, and denote the remaining symbols excluding $k$ as $\bar{C}_k$ so that $\{k\} \cup C_k \cup \bar{C}_k = \{1, \dots, K\}$. Then, a tighter bound, that exists higher in the hierarchy than the one-vs-each bound (see Eq. 4), takes the form
$$p(y = k) \ge \mathrm{Softmax}_k(f_k, f_{C_k}) \times \mathrm{Softmax}_k(f_k, f_{\bar{C}_k}) \ge \mathrm{Softmax}_k(f_k, f_{C_k}) \times \prod_{m \in \bar{C}_k} \sigma(f_k - f_m),\qquad (6)$$
where $\mathrm{Softmax}_k(f_k, f_{C_k}) = \frac{e^{f_k}}{e^{f_k} + \sum_{m \in C_k} e^{f_m}}$ and $\mathrm{Softmax}_k(f_k, f_{\bar{C}_k}) = \frac{e^{f_k}}{e^{f_k} + \sum_{m \in \bar{C}_k} e^{f_m}}$. For simplicity of our presentation in the remaining of the paper we do not discuss further these more general bounds and we focus only on the one-vs-each bound.
The computationally useful aspect of the bound in Eq. (4) is that it factorizes into a product, where
each factor depends only on a pair of parameters (fk , fm ). Crucially, this avoids the evaluation of the
normalizing constant associated with the global probability in Eq. (2) and, as discussed in Section 3, it
leads to scalable training using stochastic optimization that can deal with very large K. Furthermore,
approximate maximum likelihood estimation based on the bound can be very accurate and, as shown
in the next section, it is exact for the non-parametric estimation case.
The fact that the one-vs-each bound in (4) is a product of pairwise probabilities suggests that there
is a connection with Bradley-Terry (BT) models [6, 10] for learning individual skills from paired
comparisons and the associated multiclass classification systems obtained by combining binary
classifiers, such as one-vs-rest and one-vs-one approaches [10]. Our method differs from BT models,
since we do not combine binary probabilistic models to a posteriori form a multiclass model. Instead,
we wish to develop scalable approximate algorithms that can surrogate the training of multiclass
softmax-based models by maximizing lower bounds on the exact likelihoods of these models.
2.2 Optimality of the bound for maximum likelihood estimation
Assume a set of observations $(y_1, \dots, y_N)$ where each $y_i \in \{1, \dots, K\}$. The log likelihood of the data takes the form
$$\mathcal{L}(f) = \log \prod_{i=1}^N p(y_i) = \log \prod_{k=1}^K p(y = k)^{N_k},\qquad (7)$$
where $f = (f_1, \dots, f_K)$ and $N_k$ denotes the number of data points with value $k$. By substituting $p(y = k)$ from Eq. (2) and then taking derivatives with respect to $f$ we arrive at the standard stationary conditions of the maximum likelihood solution,
$$\frac{e^{f_k}}{\sum_{m=1}^K e^{f_m}} = \frac{N_k}{N}, \quad k = 1, \dots, K.\qquad (8)$$
These stationary conditions are satisfied for $f_k = \log N_k + c$ where $c \in \mathbb{R}$ is an arbitrary constant. What is rather surprising is that the same solutions $f_k = \log N_k + c$ satisfy also the stationary conditions when maximizing a lower bound on the exact log likelihood obtained from the product of one-vs-each probabilities.
More precisely, by replacing $p(y = k)$ with the bound from Eq. (4) we obtain a lower bound on the exact log likelihood,
$$\mathcal{F}(f) = \log \prod_{k=1}^K \left(\prod_{m \ne k} \frac{e^{f_k}}{e^{f_k} + e^{f_m}}\right)^{N_k} = \sum_{k > m} \log P(f_k, f_m),\qquad (9)$$
where $P(f_k, f_m) = \left(\frac{e^{f_k}}{e^{f_k} + e^{f_m}}\right)^{N_k} \left(\frac{e^{f_m}}{e^{f_k} + e^{f_m}}\right)^{N_m}$ is a likelihood involving only the data of the pair of states $(k, m)$, while there exist $K(K - 1)/2$ possible such pairs. If instead of maximizing the exact log likelihood from Eq. (7) we maximize the lower bound we obtain the same parameter estimates.
Proposition 2. The maximum likelihood parameter estimates $f_k = \log N_k + c$, $k = 1, \dots, K$ for the exact log likelihood from Eq. (7) also globally maximize the lower bound from Eq. (9).
Proof. By computing the derivatives of $\mathcal{F}(f)$ we obtain the following stationary conditions
$$K - 1 = \sum_{m \ne k} \frac{N_k + N_m}{N_k} \cdot \frac{e^{f_k}}{e^{f_k} + e^{f_m}}, \quad k = 1, \dots, K,\qquad (10)$$
which form a system of $K$ non-linear equations over the unknowns $(f_1, \dots, f_K)$. By substituting the values $f_k = \log N_k + c$ we can observe that all $K$ equations are simultaneously satisfied, which means that these values are solutions. Furthermore, since $\mathcal{F}(f)$ is a concave function of $f$ we can conclude that the solutions $f_k = \log N_k + c$ globally maximize $\mathcal{F}(f)$.
Remark. Not only is $\mathcal{F}(f)$ globally maximized by setting $f_k = \log N_k + c$, but also each pairwise likelihood $P(f_k, f_m)$ in Eq. (9) is separately maximized by the same setting of parameters.
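A quick numerical confirmation of Proposition 2 (our own sketch): at $f_k = \log N_k$ we have $\sigma(f_k - f_m) = N_k/(N_k + N_m)$, so each term of the sum in (10) equals one and the stationary conditions hold exactly:

```python
import numpy as np

rng = np.random.default_rng(0)
N_k = rng.integers(1, 100, size=6).astype(float)   # counts for K = 6 classes
f = np.log(N_k)                                    # candidate maximizer (c = 0)

def sigmoid(t):
    return 1.0 / (1.0 + np.exp(-t))

K = len(N_k)
for k in range(K):
    rhs = sum((N_k[k] + N_k[m]) / N_k[k] * sigmoid(f[k] - f[m])
              for m in range(K) if m != k)
    assert np.isclose(K - 1, rhs)   # stationary conditions (10) hold
print("Proposition 2 verified at f_k = log N_k")
```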
2.3 Comparison with Bouchard's bound
Bouchard [5] proposed a related bound that next we analyze in terms of its ability to approximate the exact maximum likelihood training in the non-parametric case, and then we compare it against our method. Bouchard [5] was motivated by the problem of applying variational Bayesian inference to multiclass classification and he derived the following upper bound on the log-sum-exp function,
$$\log \sum_{m=1}^K e^{f_m} \le \alpha + \sum_{m=1}^K \log\left(1 + e^{f_m - \alpha}\right),\qquad (11)$$
where $\alpha \in \mathbb{R}$ is a variational parameter that needs to be optimized in order for the bound to become as tight as possible. The above induces a lower bound on the softmax probability $p(y = k)$ from Eq. (2) that takes the form
$$p(y = k) \ge \frac{e^{f_k - \alpha}}{\prod_{m=1}^K \left(1 + e^{f_m - \alpha}\right)}.\qquad (12)$$
This is not the same as Eq. (4), since there is no value of $\alpha$ for which the above bound reduces to our proposed one. For instance, if we set $\alpha = f_k$, then Bouchard's bound becomes half the one in Eq. (4) due to the extra term $1 + e^{f_k - f_k} = 2$ in the product in the denominator.¹ Furthermore, such a value for $\alpha$ may not be the optimal one and in practice $\alpha$ must be chosen by minimizing the upper bound in Eq. (11). While such an optimization is a convex problem, it requires iterative optimization since there is not in general an analytical solution for $\alpha$. However, for the simple case where $K = 2$ we can analytically find the optimal $\alpha$ and the optimal $f$ parameters. The following proposition carries out this analysis and provides a clear understanding of how Bouchard's bound behaves when applied for approximate maximum likelihood estimation.
Proposition 3. Assume that $K = 2$ and we approximate the probabilities $p(y = 1)$ and $p(y = 2)$ from (2) with the corresponding Bouchard's bounds given by $\frac{e^{f_1 - \alpha}}{(1 + e^{f_1 - \alpha})(1 + e^{f_2 - \alpha})}$ and $\frac{e^{f_2 - \alpha}}{(1 + e^{f_1 - \alpha})(1 + e^{f_2 - \alpha})}$. These bounds are used to approximate the maximum likelihood solution by maximizing a bound $\mathcal{F}(f_1, f_2, \alpha)$ which is globally maximized for
$$\alpha = \frac{f_1 + f_2}{2}, \quad f_k = 2 \log N_k + c, \; k = 1, 2.\qquad (13)$$
The proof of the above is given in the Supplementary material. Notice that the above estimates are biased so that the probability of the most populated class (say $y = 1$, for which $N_1 > N_2$) is overestimated while the other probability is underestimated. This is due to the factor 2 that multiplies $\log N_1$ and $\log N_2$ in (13).
Also notice that the solution $\alpha = \frac{f_1 + f_2}{2}$ is not a general trend, i.e. for $K > 2$ the optimal $\alpha$ is not the mean of the $f_k$'s. In such cases approximate maximum likelihood estimation based on Bouchard's bound requires iterative optimization. Figure 1a shows some estimated softmax probabilities, using a dataset of 200 points each taking one out of ten values, where $f$ is found by exact maximum likelihood, the proposed one-vs-each bound and Bouchard's method. As expected, estimation based on the bound in Eq. (4) gives the exact probabilities, while Bouchard's bound tends to overestimate large probabilities and underestimate small ones.
¹ Notice that the product in Eq. (4) excludes the value $k$, while Bouchard's bound includes it.
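The comparison can be reproduced in a few lines (our own sketch; $\alpha$ is tuned here with SciPy's bounded scalar minimizer, though any one-dimensional convex solver would do). On random scores the one-vs-each bound typically comes out tighter, matching the behavior seen in Figure 1c:

```python
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(0)
f = rng.normal(size=10)                      # random scores f_1..f_K
k = int(np.argmax(f))

def sigmoid(t):
    return 1.0 / (1.0 + np.exp(-t))

log_softmax_k = f[k] - np.log(np.sum(np.exp(f)))
log_ove_k = sum(np.log(sigmoid(f[k] - f[m])) for m in range(len(f)) if m != k)

# Bouchard: log p(y=k) >= f_k - alpha - sum_m log(1 + e^{f_m - alpha}),
# with alpha chosen to make the log-sum-exp upper bound (11) as tight as possible
res = minimize_scalar(lambda a: a + np.sum(np.log1p(np.exp(f - a))),
                      bounds=(f.min(), f.max() + np.log(len(f))),
                      method="bounded")
log_bouchard_k = f[k] - res.x - np.sum(np.log1p(np.exp(f - res.x)))

# Both bounds sit below the exact log probability; in this instance
# the one-vs-each bound is the tighter of the two
print(log_bouchard_k, log_ove_k, log_softmax_k)
```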
Figure 1: (a) shows the probabilities estimated by exact softmax (blue bar), one-vs-each approximation (red bar) and Bouchard's method (green bar). (b) shows the 5-class artificial data together with the decision boundaries found by exact softmax (blue line), one-vs-each (red line) and Bouchard's bound (green line). (c) shows the maximized (approximate) log likelihoods for the different approaches when applied to the data of panel (b) (see Section 3). Notice that the blue line in (c) is the exact maximized log likelihood while the remaining lines correspond to lower bounds.
3 Stochastic optimization for extreme classification
Here, we return to the general form of the softmax probabilities as defined by Eq. (1) where the score functions are indexed by input $x$ and parameterized by $w$. We consider a classification task where given a training set $\{x_n, y_n\}_{n=1}^N$, where $y_n \in \{1, \dots, K\}$, we wish to fit the parameters $w$ by maximizing the log likelihood,
$$\mathcal{L} = \log \prod_{n=1}^N \frac{e^{f_{y_n}(x_n; w)}}{\sum_{m=1}^K e^{f_m(x_n; w)}}.\qquad (14)$$
When the number of training instances is very large, the above maximization can be carried out by applying stochastic gradient descent (by minimizing $-\mathcal{L}$) where we cycle over minibatches. However, this stochastic optimization procedure cannot deal with large values of $K$ because the normalizing constant in the softmax couples all score functions so that the log likelihood cannot be expressed as a sum across class labels. To overcome this, we can use the one-vs-each lower bound on the softmax probability from Eq. (4) and obtain the following lower bound on the previous log likelihood,
$$\mathcal{F} = \log \prod_{n=1}^N \prod_{m \ne y_n} \frac{1}{1 + e^{-[f_{y_n}(x_n; w) - f_m(x_n; w)]}} = -\sum_{n=1}^N \sum_{m \ne y_n} \log\left(1 + e^{-[f_{y_n}(x_n; w) - f_m(x_n; w)]}\right),\qquad (15)$$
which now consists of a sum over both data points and labels. Interestingly, the sum over the labels, $\sum_{m \ne y_n}$, runs over all remaining classes that are different from the label $y_n$ assigned to $x_n$. Each term in the sum is a logistic regression cost, that depends on the pairwise score difference $f_{y_n}(x_n; w) - f_m(x_n; w)$, and encourages the $n$-th data point to get separated from the $m$-th remaining class. The above lower bound can be optimized by stochastic gradient descent by subsampling terms in the double sum in Eq. (15), thus resulting in a doubly stochastic approximation scheme. Next we further discuss the stochasticity associated with subsampling remaining classes.
The gradient for the cost associated with a single training instance $(x_n, y_n)$ is
$$\nabla \mathcal{F}_n = \sum_{m \ne y_n} \sigma\left(f_m(x_n; w) - f_{y_n}(x_n; w)\right) \left[\nabla_w f_{y_n}(x_n; w) - \nabla_w f_m(x_n; w)\right].\qquad (16)$$
This gradient consists of a weighted sum where the sigmoidal weights $\sigma(f_m(x_n; w) - f_{y_n}(x_n; w))$ quantify the contribution of the remaining classes to the whole gradient; the more a remaining class overlaps with $y_n$ (given $x_n$) the higher its contribution is. A simple way to get an unbiased stochastic estimate of (16) is to randomly subsample a small subset of remaining classes from the set $\{m \mid m \ne y_n\}$. More advanced schemes could be based on importance sampling where we introduce a proposal distribution $p_n(m)$ defined on the set $\{m \mid m \ne y_n\}$ that could favor selecting classes with large sigmoidal weights. While such more advanced schemes could reduce variance, they require
prior knowledge (or on-the-fly learning) about how classes overlap with one another. Thus, in Section
4 we shall experiment only with the simple random subsampling approach and leave the above
advanced schemes for future work.
To illustrate the above stochastic gradient descent algorithm we simulated a two-dimensional data set of 200 instances, shown in Figure 1b, that belong to five classes. We consider a linear classification model where the score functions take the form $f_k(x_n, w) = w_k^\top x_n$ and where the full set of parameters is $w = (w_1, \dots, w_K)$. We consider minibatches of size ten to approximate the sum $\sum_n$ and subsets of remaining classes of size one to approximate $\sum_{m \ne y_n}$. Figure 1c shows the stochastic evolution of the approximate log likelihood (dashed red line), i.e. the unbiased subsampling-based approximation of (15), together with the maximized exact softmax log likelihood (blue line), the non-stochastically maximized approximate lower bound from (15) (red solid line) and Bouchard's method (green line). To apply Bouchard's method we construct a lower bound on the log likelihood by replacing each softmax probability with the bound from (12) where we also need to optimize a separate variational parameter $\alpha_n$ for each data point. As shown in Figure 1c our method provides a tighter lower bound than Bouchard's method despite the fact that it does not contain any variational parameters. Also, Bouchard's method can become very slow when combined with stochastic gradient descent since it requires tuning a separate variational parameter $\alpha_n$ for each training instance. Figure 1b also shows the decision boundaries discovered by the exact softmax, one-vs-each bound and Bouchard's bound. Finally, the actual parameter values found by maximizing the one-vs-each bound were remarkably close (although not identical) to the parameters found by the exact softmax.
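A compact sketch of this doubly stochastic scheme for the linear model follows (our own implementation of the updates implied by Eqs. (15)-(16); the synthetic data, minibatch size, step size and iteration count are placeholders rather than the exact settings used for Figure 1):

```python
import numpy as np

rng = np.random.default_rng(0)
N, D, K = 200, 2, 5
X = rng.normal(size=(N, D))                 # placeholder inputs
y = rng.integers(0, K, size=N)              # placeholder labels
W = np.zeros((K, D))                        # one weight vector per class

def sigmoid(t):
    return 1.0 / (1.0 + np.exp(-t))

lr, batch, n_neg = 0.1, 10, 1               # minibatch of 10, 1 remaining class
for it in range(10_000):
    idx = rng.choice(N, size=batch, replace=False)
    grad = np.zeros_like(W)
    for n in idx:
        yn = y[n]
        # subsample n_neg classes from {m : m != yn}
        negs = rng.choice([m for m in range(K) if m != yn],
                          size=n_neg, replace=False)
        for m in negs:
            # stochastic version of Eq. (16); the factor (K - 1) / n_neg
            # keeps the estimate of the inner sum over labels unbiased
            wgt = sigmoid(X[n] @ W[m] - X[n] @ W[yn]) * (K - 1) / n_neg
            grad[yn] += wgt * X[n]
            grad[m] -= wgt * X[n]
    W += lr * grad / batch                  # gradient ascent on the bound F
```

Note that each update touches only the weight vectors of the sampled classes, which is what makes the scheme attractive when $K$ is very large.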
4 Experiments
4.1 Toy example in large scale non-parametric estimation
Here, we illustrate the ability to stochastically maximize the bound in Eq. (9) for the simple non-parametric estimation case. In such case, we can also maximize the bound based on the analytic formulas and therefore we will be able to test how well the stochastic algorithm can approximate the optimal/known solution. We consider a data set of $N = 10^6$ instances each taking one out of $K = 10^4$ possible categorical values. The data were generated from a distribution $p(k) \propto u_k^2$, where each $u_k$ was randomly chosen in $[0, 1]$. The probabilities estimated based on the analytic formulas are shown in Figure 2a. To stochastically estimate these probabilities we follow the doubly stochastic framework of Section 3 so that we subsample data instances of minibatch size $b = 100$ and for each instance we subsample 10 remaining categorical values. We use a learning rate initialized to $0.5/b$ (and then decrease it by a factor of 0.9 after each epoch) and performed $2 \times 10^5$ iterations. Figure 2b shows the final values for the estimated probabilities, while Figure 2c shows the evolution of the estimation error during the optimization iterations. We can observe that the algorithm performs well and exhibits a typical stochastic approximation convergence.
Figure 2: (a) shows the optimally estimated probabilities, which have been sorted for visualization purposes. (b) shows the corresponding probabilities estimated by stochastic optimization. (c) shows the absolute norm for the vector of differences between exact estimates and stochastic estimates.
4.2 Classification
Small scale classification comparisons. Here, we wish to investigate whether the proposed lower bound on the softmax is a good surrogate for exact softmax training in classification. More precisely, we wish to compare the parameter estimates obtained by the one-vs-each bound with the estimates obtained by exact softmax training. To quantify closeness we use the normalized absolute norm
$$\text{norm} = \frac{|w_{\text{softmax}} - w_*|}{|w_{\text{softmax}}|},\qquad (17)$$
where $w_{\text{softmax}}$ denotes the parameters obtained by exact softmax training and $w_*$ denotes estimates obtained by approximate training. Further, we will also report predictive performance measured by classification error and negative log predictive density (nlpd) averaged across test data,
$$\text{error} = \frac{1}{N_{\text{test}}} \sum_{i=1}^{N_{\text{test}}} I(y_i \ne t_i), \qquad \text{nlpd} = \frac{1}{N_{\text{test}}} \sum_{i=1}^{N_{\text{test}}} -\log p(t_i | x_i),\qquad (18)$$
where $t_i$ denotes the true label of a test point and $y_i$ the predicted one. We trained the linear multiclass model of Section 3 with the following alternative methods: exact softmax training (SOFT), the one-vs-each bound (OVE), the stochastically optimized one-vs-each bound (OVE-SGD) and Bouchard's bound (BOUCHARD). For all approaches, the associated cost function was maximized together with an added regularization penalty term, $-\frac{1}{2}\lambda\|w\|^2$, which ensures that the global maximum of the cost function is achieved for finite $w$. Since we want to investigate how well we surrogate exact softmax training, we used the same fixed value $\lambda = 1$ in all experiments.
We considered three small scale multiclass classification datasets: MNIST², 20NEWS³ and BIBTEX [12]; see Table 1 for details. Notice that BIBTEX is originally a multi-label classification dataset [2], where each example may have more than one label. Here, we maintained only a single label for each data point in order to apply standard multiclass classification. The maintained label was the first label appearing in each data entry in the repository files⁴ from which we obtained the data.
Figure 3 displays convergence of the lower bounds (and for the exact softmax cost) for all methods.
Recall, that the methods SOFT, OVE and BOUCHARD are non-stochastic and therefore their optimization can be carried out by standard gradient descent. Notice that in all three datasets the one-vs-each
bound gets much closer to the exact softmax cost compared to Bouchard?s bound. Thus, OVE tends to
give a tighter bound despite that it does not contain any variational parameters, while BOUCHARD has
N extra variational parameters, i.e. as many as the training instances. The application of OVE - SGD
method (the stochastic version of OVE) is based on a doubly stochastic scheme where we subsample
minibatches of size 200 and subsample remaining classes of size one. We can observe that OVE - SGD
is able to stochastically approach its maximum value which corresponds to OVE.
Table 2 shows the parameter closeness score from Eq. (17) as well as the classification predictive
scores. We can observe that OVE and OVE - SGD provide parameters closer to those of SOFT than the
parameters provided by BOUCHARD. Also, the predictive scores for OVE and OVE - SGD are similar to
SOFT, although they tend to be slightly worse. Interestingly, BOUCHARD gives the best classification
error, even better than the exact softmax training, but at the same time it always gives the worst nlpd
which suggests sensitivity to overfitting. However, recall that the regularization parameter ? was
fixed to the value one and it was not optimized separately for each method using cross validation.
Also notice that BOUCHARD cannot be easily scaled up (with stochastic optimization) to massive
datasets since it introduces an extra variational parameter for each training instance.
Large scale classification. Here, we consider AMAZONCAT-13 K (see footnote 4) which is a large
scale classification dataset. This dataset is originally multi-labelled [2] and here we maintained only
a single label, as done for the BIBTEX dataset, in order to apply standard multiclass classification.
This dataset is also highly imbalanced since there are about 15 classes having the half of the training
instances while they are many classes having very few (or just a single) training instances.
Further, notice that in this large dataset the number of parameters we need to estimate for the linear
classification model is very large: $K \times (D + 1) = 2919 \times 203883$ parameters, where the plus one
accounts for the biases. All methods apart from OVE - SGD are practically very slow in this massive
dataset, and therefore we consider OVE - SGD which is scalable.
We applied OVE - SGD where at each stochastic gradient update we consider a single training instance
(i.e. the minibatch size was one) and for that instance we randomly select five remaining classes. This
² http://yann.lecun.com/exdb/mnist
³ http://qwone.com/~jason/20Newsgroups/
⁴ http://research.microsoft.com/en-us/um/people/manik/downloads/XC/XMLRepository.html
Table 1: Summaries of the classification datasets.

Name           | Dimensionality | Classes | Training examples | Test examples
MNIST          | 784            | 10      | 60000             | 10000
20NEWS         | 61188          | 20      | 11269             | 7505
BIBTEX         | 1836           | 148     | 4880              | 2515
AMAZONCAT-13K  | 203882         | 2919    | 1186239           | 306759
Table 2: Score measures for the small scale classification datasets.

        | SOFT (error, nlpd) | BOUCHARD (norm, error, nlpd) | OVE (norm, error, nlpd) | OVE-SGD (norm, error, nlpd)
MNIST   | (0.074, 0.271)     | (0.64, 0.073, 0.333)         | (0.50, 0.082, 0.287)    | (0.53, 0.080, 0.278)
20NEWS  | (0.272, 1.263)     | (0.65, 0.249, 1.337)         | (0.05, 0.276, 1.297)    | (0.14, 0.276, 1.312)
BIBTEX  | (0.622, 2.793)     | (0.25, 0.621, 2.955)         | (0.09, 0.636, 2.888)    | (0.10, 0.633, 2.875)
Figure 3: (a) shows the evolution of the lower bound values for MNIST, (b) for 20NEWS and (c) for BIBTEX. For clearer visualization the bounds of the stochastic OVE-SGD have been smoothed using a rolling window of 400 previous values. (d) shows the evolution of the OVE-SGD lower bound (scaled to correspond to a single data point) in the large scale AMAZONCAT-13K dataset. Here, the plotted values have also been smoothed using a rolling window of size 4000 and then thinned by a factor of 5.
leads to sparse parameter updates, where the score function parameters of only six classes (the class
of the current training instance plus the remaining five ones) are updated at each iteration. We used a
very small learning rate having value $10^{-8}$ and we performed five epochs across the full dataset, that is, we performed in total $5 \times 1186239$ stochastic gradient updates. After each epoch we halve the value of the learning rate before the next epoch starts.
vectors each iteration is very fast and full training is completed in just 26 minutes in a stand-alone
PC. The evolution of the variational lower bound that indicates convergence is shown in Figure 3d.
Finally, the classification error in test data was 53.11% which is significantly better than random
guessing or by a method that decides always the most populated class (where in AMAZONCAT-13 K
the most populated class occupies the 19% of the data so the error of that method is around 79%).
5 Discussion
We have presented the one-vs-each lower bound on softmax probabilities and we have analyzed
its theoretical properties. This bound is just the most extreme case of a full family of hierarchically ordered bounds. We have explored the ability of the bound to perform parameter estimation
through stochastic optimization in models having large number of categorical symbols, and we have
demonstrated this ability to classification problems.
There are several directions for future research. Firstly, it is worth investigating the usefulness of the
bound in different applications from classification, such as for learning word embeddings in natural
language processing and for training recommendation systems. Another interesting direction is to
consider the bound not for point estimation, as done in this paper, but for Bayesian estimation using
variational inference.
Acknowledgments
We thank the reviewers for insightful comments. We would like also to thank Francisco J. R. Ruiz for
useful discussions and David Blei for suggesting the name one-vs-each for the proposed method.
References
[1] Yoshua Bengio and Jean-Sébastien Sénécal. Quick training of probabilistic neural nets by importance sampling. In Proceedings of the Conference on Artificial Intelligence and Statistics (AISTATS), 2003.
[2] Kush Bhatia, Himanshu Jain, Purushottam Kar, Manik Varma, and Prateek Jain. Sparse local embeddings for extreme multi-label classification. In C. Cortes, N. D. Lawrence, D. D. Lee, M. Sugiyama, and R. Garnett, editors, Advances in Neural Information Processing Systems 28, pages 730–738. Curran Associates, Inc., 2015.
[3] Christopher M. Bishop. Pattern Recognition and Machine Learning (Information Science and Statistics). Springer-Verlag New York, Inc., Secaucus, NJ, USA, 2006.
[4] D. Bohning. Multinomial logistic regression algorithm. Annals of the Inst. of Statistical Math, 44:197–200, 1992.
[5] Guillaume Bouchard. Efficient bounds for the softmax function and applications to approximate inference in hybrid models. Technical report, 2007.
[6] R. A. Bradley and M. E. Terry. Rank analysis of incomplete block designs: I. The method of paired comparisons. Biometrika, 39(3/4):324–345, 1952.
[7] Jacob Devlin, Rabih Zbib, Zhongqiang Huang, Thomas Lamar, Richard Schwartz, and John Makhoul. Fast and robust neural network joint models for statistical machine translation. In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1370–1380, Baltimore, Maryland, June 2014. Association for Computational Linguistics.
[8] Ian Goodfellow, Yoshua Bengio, and Aaron Courville. Deep Learning. Book in preparation for MIT Press, 2016.
[9] Siddharth Gopal and Yiming Yang. Distributed training of large-scale logistic models. In Sanjoy Dasgupta and David McAllester, editors, Proceedings of the 30th International Conference on Machine Learning (ICML-13), pages 289–297. JMLR Workshop and Conference Proceedings, 2013.
[10] Tzu-Kuo Huang, Ruby C. Weng, and Chih-Jen Lin. Generalized Bradley-Terry models and multi-class probability estimates. J. Mach. Learn. Res., 7:85–115, December 2006.
[11] Shihao Ji, S. V. N. Vishwanathan, Nadathur Satish, Michael J. Anderson, and Pradeep Dubey. Blackout: Speeding up recurrent neural network language models with very large vocabularies. 2015.
[12] Ioannis Katakis, Grigorios Tsoumakas, and Ioannis Vlahavas. Multilabel text classification for automated tag suggestion. In Proceedings of the ECML/PKDD-08 Workshop on Discovery Challenge, 2008.
[13] Mohammad Emtiyaz Khan, Shakir Mohamed, Benjamin M. Marlin, and Kevin P. Murphy. A stick-breaking likelihood for categorical data analysis with latent Gaussian models. In Proceedings of the Fifteenth International Conference on Artificial Intelligence and Statistics, AISTATS 2012, La Palma, Canary Islands, April 21-23, 2012, pages 610–618, 2012.
[14] Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg S. Corrado, and Jeff Dean. Distributed representations of words and phrases and their compositionality. In C. J. C. Burges, L. Bottou, M. Welling, Z. Ghahramani, and K. Q. Weinberger, editors, Advances in Neural Information Processing Systems 26, pages 3111–3119. Curran Associates, Inc., 2013.
[15] Andriy Mnih and Yee Whye Teh. A fast and simple algorithm for training neural probabilistic language models. In Proceedings of the 29th International Conference on Machine Learning, pages 1751–1758, 2012.
[16] F. Morin and Y. Bengio. Hierarchical probabilistic neural network language model. In Proceedings of the Tenth International Workshop on Artificial Intelligence and Statistics, pages 246–252. Citeseer, 2005.
[17] Ulrich Paquet, Noam Koenigstein, and Ole Winther. Scalable Bayesian modelling of paired symbols. CoRR, abs/1409.2824, 2012.
[18] Jeffrey Pennington, Richard Socher, and Christopher Manning. GloVe: Global vectors for word representation. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 1532–1543, Doha, Qatar, October 2014. Association for Computational Linguistics.
[19] Sudheendra Vijayanarasimhan, Jonathon Shlens, Rajat Monga, and Jay Yagnik. Deep networks with large output spaces. CoRR, abs/1412.7479, 2014.
9
| 6468 |@word repository:1 version:1 norm:6 nd:1 palma:1 crucially:1 jacob:1 citeseer:1 sgd:12 solid:1 carry:1 initial:3 qatar:1 score:13 selecting:1 bibtex:6 blackout:1 interestingly:2 bradley:3 current:1 com:3 surprising:1 must:1 john:1 fn:1 analytic:2 update:3 v:21 stationary:4 half:2 prohibitive:1 alone:1 item:1 parameterization:1 intelligence:3 blei:1 provides:2 math:1 firstly:2 sigmoidal:2 five:4 become:3 prove:1 doubly:4 consists:3 combine:1 thinned:1 introduce:4 pairwise:6 expected:1 pkdd:1 multi:4 globally:4 siddharth:1 actual:1 window:2 becomes:2 spain:1 provided:1 panel:1 katakis:1 what:1 prateek:1 marlin:1 nj:1 ti:3 concave:1 exactly:1 um:1 classifier:1 scaled:2 stick:1 uk:1 biometrika:1 schwartz:1 yn:13 overestimate:1 before:2 local:1 tends:2 despite:2 mach:1 merge:2 plus:2 downloads:1 suggests:2 bi:8 averaged:1 acknowledgment:1 lecun:1 union:1 practice:1 block:1 differs:1 procedure:2 area:1 empirical:1 significantly:1 sudheendra:1 word:4 morin:1 get:3 cannot:3 close:1 cal:1 vijayanarasimhan:1 applying:3 yee:1 optimize:1 dean:1 demonstrated:1 reviewer:1 maximizing:6 quick:1 economics:1 convex:3 tomas:1 shlens:1 varma:1 updated:1 annals:1 construction:2 play:1 hierarchy:1 user:1 exact:31 massive:2 curran:2 goodfellow:1 associate:2 trend:1 expensive:2 recognition:1 observed:1 role:1 fly:1 worst:1 thousand:1 ensures:1 cycle:1 news:3 decrease:1 benjamin:1 multilabel:1 trained:1 tight:1 ove:21 predictive:4 titsias:1 f2:2 easily:2 joint:1 derivation:2 separated:1 jain:2 fast:3 ef1:3 ole:1 artificial:4 bhatia:1 grigorios:1 kevin:1 u2k:1 jean:1 supplementary:1 valued:2 kai:1 say:1 ability:4 favor:1 statistic:4 paquet:1 final:1 shakir:1 analytical:1 net:1 product:7 combining:1 secaucus:1 exploiting:1 convergence:3 double:1 optimum:1 sutskever:1 perfect:1 leave:1 yiming:1 koenigstein:1 derive:1 develop:2 illustrate:2 recurrent:1 measured:1 sim:1 eq:23 predicted:1 involves:1 quantify:2 direction:2 stochastic:27 occupies:1 jonathon:1 mcallester:1 material:1 tsoumakas:1 require:2 zbib:1 f1:6 proposition:4 tighter:5 secondly:1 hold:4 practically:1 around:1 considered:1 exp:1 lawrence:1 bj:3 substituting:2 purpose:1 estimation:20 athens:1 label:13 stickbreaking:1 mtitsias:1 weighted:1 mit:1 clearly:1 always:2 gopal:1 gaussian:1 ck:2 rather:1 pn:1 factorizes:1 derived:2 focus:1 june:1 rank:1 likelihood:30 indicates:1 modelling:1 greatly:1 contrast:1 rigorous:3 posteriori:1 inference:5 inst:1 bt:2 compatibility:1 classification:29 html:1 multiplies:1 softmax:42 construct:1 having:4 sampling:3 identical:1 icml:1 future:2 report:2 yoshua:2 richard:2 few:1 modern:1 randomly:3 simultaneously:1 individual:1 murphy:1 replaced:1 jeffrey:1 n1:2 microsoft:1 attempt:1 ab:2 highly:2 investigate:2 mnih:1 evaluation:1 introduces:1 analyzed:1 extreme:3 weng:1 pradeep:1 pc:1 accurate:3 closer:2 indexed:2 incomplete:1 initialized:1 re:2 plotted:1 theoretical:2 inm:1 instance:18 modeling:2 soft:6 lamar:1 maximization:1 phrase:1 cost:9 shihao:1 subset:3 entry:1 rolling:2 usefulness:1 satish:1 gr:1 optimally:1 combined:2 density:1 international:4 sensitivity:1 winther:1 overestimated:1 probabilistic:5 lee:1 informatics:1 michael:1 together:3 ilya:1 w1:1 tzu:1 satisfied:2 nm:1 huang:2 emnlp:1 worse:1 stochastically:5 book:1 derivative:2 return:1 toy:1 account:2 suggesting:1 ioannis:2 wk:1 includes:1 inc:3 satisfy:1 depends:2 manik:2 performed:3 try:1 jason:1 analyze:2 red:4 start:1 bouchard:29 contribution:2 greg:1 qk:1 variance:1 maximized:8 correspond:2 emtiyaz:1 bayesian:4 worth:1 rabih:1 footnote:1 halve:1 
whenever:1 definition:1 against:1 underestimate:1 pp:2 mohamed:1 associated:8 proof:3 couple:1 dataset:10 recall:2 knowledge:1 dimensionality:1 higher:3 originally:2 follow:1 april:1 done:3 anderson:1 furthermore:5 just:3 replacing:2 christopher:2 minibatch:3 logistic:3 name:2 usa:1 contain:3 unbiased:2 normalized:1 true:1 evolution:5 analytically:1 assigned:1 regularization:2 qwone:1 deal:3 during:1 encourages:1 maintained:3 generalized:1 prominent:1 whye:1 exdb:1 ruby:1 demonstrate:2 mohammad:1 performs:1 variational:14 common:1 sigmoid:1 behaves:1 multinomial:1 empirically:1 ji:1 conditioning:1 volume:1 million:1 thirdly:1 discussed:1 he:1 belong:1 association:3 refer:1 tuning:1 fk:36 populated:3 doha:1 stochasticity:1 sugiyama:1 language:7 setpof:1 imbalanced:1 purushottam:1 optimizing:1 apart:1 verlag:1 inequality:1 binary:2 kar:1 yagnik:1 meeting:1 yi:4 unrestricted:1 maximize:5 corrado:1 dashed:1 full:5 desirable:2 technical:1 cross:1 long:1 lin:1 zhongqiang:1 paired:3 scalable:9 involving:1 aueb:1 denominator:1 regression:2 fifteenth:1 iteration:10 monga:1 achieved:1 proposal:1 remarkably:1 separately:2 want:1 baltimore:1 underestimated:1 extra:3 rest:1 biased:1 wkt:1 comment:1 tend:1 facilitates:1 december:1 yang:1 bengio:3 embeddings:3 m6:11 newsgroups:1 automated:1 fit:1 fm:18 andriy:1 reduce:3 regarding:1 devlin:1 multiclass:9 whether:1 motivated:1 six:1 kush:1 penalty:1 york:1 remark:2 deep:2 generally:3 useful:2 clear:2 dubey:1 nonparametric:1 ten:3 induces:1 http:3 exist:2 notice:8 estimated:9 disjoint:1 blue:4 discrete:1 write:1 shall:1 dasgupta:1 express:2 key:1 tenth:1 excludes:1 sum:9 run:1 parameterized:2 arrive:1 family:1 chih:1 yann:1 draw:1 decision:2 bound:95 display:1 courville:1 annual:1 precisely:2 vishwanathan:1 tag:1 aspect:1 optimality:2 performing:1 mikolov:1 department:1 manning:1 makhoul:1 across:3 slightly:1 island:1 plicity:1 computationally:3 equation:2 visualization:2 remains:1 discus:2 needed:1 apply:3 observe:4 hierarchical:2 himanshu:1 appearing:1 vlahavas:1 alternative:1 weinberger:1 thomas:1 denotes:5 remaining:16 michalis:1 subsampling:7 include:1 completed:1 linguistics:3 xc:1 ghahramani:1 ink:1 added:1 parametric:5 surrogate:4 guessing:1 exhibit:1 gradient:10 separate:2 thank:2 simulated:1 maryland:1 nlpd:7 modeled:1 minimizing:2 october:1 statement:1 noam:1 negative:2 stated:1 design:1 countable:1 unknown:1 perform:3 teh:1 upper:2 observation:1 datasets:6 finite:1 descent:5 ecml:1 excluding:1 y1:1 discovered:1 smoothed:2 arbitrary:2 compositionality:1 david:2 pair:5 nadathur:1 khan:1 connection:2 optimized:4 barcelona:1 nip:1 efm:9 able:2 bar:3 below:1 pattern:1 sparsity:1 challenge:1 green:3 terry:3 event:8 overlap:2 business:1 natural:2 hybrid:1 advanced:3 representing:1 scheme:5 numerous:1 carried:2 categorical:8 canary:1 speeding:1 text:1 prior:1 understanding:1 epoch:4 discovery:1 interesting:3 suggestion:1 filtering:1 versus:1 validation:1 editor:3 ulrich:1 share:1 translation:1 compatible:1 summary:1 free:1 bias:1 bohning:1 burges:1 onevs:1 taking:4 absolute:2 sparse:2 distributed:2 boundary:2 overcome:1 xn:20 stand:1 avoids:1 vocabulary:1 welling:1 approximate:16 skill:1 global:4 overfitting:1 decides:1 investigating:1 conclude:1 francisco:1 xi:1 iterative:2 latent:1 table:4 learn:1 robust:1 efk:13 bottou:1 garnett:1 aistats:2 pk:4 hierarchically:2 whole:1 subsample:5 arise:1 n2:2 ef2:3 referred:1 en:1 slow:2 wish:5 pe:2 breaking:1 jmlr:1 jay:1 ruiz:1 ian:1 formula:2 minute:1 specific:3 bastien:1 bishop:1 jen:1 insightful:1 symbol:8 
explored:1 cortes:1 normalizing:4 closeness:2 exists:2 workshop:3 mnist:4 socher:1 corr:2 importance:2 pennington:1 nk:13 chen:1 fc:2 fck:3 expressed:2 ordered:2 scalar:1 recommendation:2 springer:1 corresponds:1 collaborating:1 minibatches:2 conditional:1 sorted:1 presentation:1 labelled:1 jeff:1 change:1 typical:1 glove:1 total:1 sanjoy:1 kuo:1 ntest:2 la:1 aaron:1 select:1 guillaume:1 people:1 rajat:1 preparation:1 |
6,045 | 6,469 | Dual Learning for Machine Translation
Di He1,?, Yingce Xia2,? , Tao Qin3 , Liwei Wang1 , Nenghai Yu2 , Tie-Yan Liu3 , Wei-Ying Ma3
1
Key Laboratory of Machine Perception (MOE), School of EECS, Peking University
2
University of Science and Technology of China 3 Microsoft Research
1
{dih,wanglw}@cis.pku.edu.cn; 2 xiayingc@mail.ustc.edu.cn; 2 ynh@ustc.edu.cn
3
{taoqin,tie-yan.liu,wyma}@microsoft.com
Abstract
While neural machine translation (NMT) is making good progress in the past
two years, tens of millions of bilingual sentence pairs are needed for its training.
However, human labeling is very costly. To tackle this training data bottleneck, we
develop a dual-learning mechanism, which can enable an NMT system to automatically learn from unlabeled data through a dual-learning game. This mechanism is
inspired by the following observation: any machine translation task has a dual task,
e.g., English-to-French translation (primal) versus French-to-English translation
(dual); the primal and dual tasks can form a closed loop, and generate informative
feedback signals to train the translation models, even if without the involvement of
a human labeler. In the dual-learning mechanism, we use one agent to represent the
model for the primal task and the other agent to represent the model for the dual
task, then ask them to teach each other through a reinforcement learning process.
Based on the feedback signals generated during this process (e.g., the languagemodel likelihood of the output of a model, and the reconstruction error of the
original sentence after the primal and dual translations), we can iteratively update
the two models until convergence (e.g., using the policy gradient methods). We call
the corresponding approach to neural machine translation dual-NMT. Experiments
show that dual-NMT works very well on English?French translation; especially,
by learning from monolingual data (with 10% bilingual data for warm start), it
achieves a comparable accuracy to NMT trained from the full bilingual data for the
French-to-English translation task.
1
Introduction
State-of-the-art machine translation (MT) systems, including both the phrase-based statistical translation approaches [6, 3, 12] and the recently emerged neural networks based translation approaches
[1, 5], heavily rely on aligned parallel training corpora. However, such parallel data are costly to
collect in practice and thus are usually limited in scale, which may constrain the related research and
applications.
Given that there exist almost unlimited monolingual data in the Web, it is very natural to leverage
them to boost the performance of MT systems. Actually different methods have been proposed for this
purpose, which can be roughly classified into two categories. In the first category [2, 4], monolingual
corpora in the target language are used to train a language model, which is then integrated with the
MT models trained from parallel bilingual corpora to improve the translation quality. In the second
category [14, 11], pseudo bilingual sentence pairs are generated from monolingual data by using the
?
The first two authors contributed equally to this work. This work was conducted when the second author
was visiting Microsoft Research Asia.
30th Conference on Neural Information Processing Systems (NIPS 2016), Barcelona, Spain.
translation model trained from aligned parallel corpora, and then these pseudo bilingual sentence
pairs are used to enlarge the training data for subsequent learning. While the above methods could
improve the MT performance to some extent, they still suffer from certain limitations. The methods
in the first category only use the monolingual data to train language models, but do not fundamentally
address the shortage of parallel training data. Although the methods in the second category can
enlarge the parallel training data, there is no guarantee/control on the quality of the pseudo bilingual
sentence pairs.
In this paper, we propose a dual-learning mechanism that can leverage monolingual data (in both
the source and target languages) in a more effective way. By using our proposed mechanism, these
monolingual data can play a similar role to the parallel bilingual data, and significantly reduce the
requirement on parallel bilingual data during the training process. Specifically, the dual-learning
mechanism for MT can be described as the following two-agent communication game.
1. The first agent, who only understands language A, sends a message in language A to the
second agent through a noisy channel, which converts the message from language A to
language B using a translation model.
2. The second agent, who only understands language B, receives the translated message in
language B. She checks the message and notifies the first agent whether it is a natural
sentence in language B (note that the second agent may not be able to verify the correctness
of the translation since the original message is invisible to her). Then she sends the received
message back to the first agent through another noisy channel, which converts the received
message from language B back to language A using another translation model.
3. After receiving the message from the second agent, the first agent checks it and notifies
the second agent whether the message she receives is consistent with her original message.
Through the feedback, both agents will know whether the two communication channels (and
thus the two translation models) perform well and can improve them accordingly.
4. The game can also be started from the second agent with an original message in language B,
and then the two agents will go through a symmetric process and improve the two channels
(translation models) according to the feedback.
It is easy to see from the above descriptions, although the two agents may not have aligned bilingual
corpora, they can still get feedback about the quality of the two translation models and collectively
improve the models based on the feedback. This game can be played for an arbitrary number of
rounds, and the two translation models will get improved through this reinforcement procedure (e.g.,
by means of the policy gradient methods). In this way, we develop a general learning framework for
training machine translation models through a dual-learning game.
The dual learning mechanism has several distinguishing features. First, we train translation models
from unlabeled data through reinforcement learning. Our work significantly reduces the requirement
on the aligned bilingual data, and it opens a new window to learn to translate from scratch (i.e., even
without using any parallel data). Experimental results show that our method is very promising.
Second, we demonstrate the power of deep reinforcement learning (DRL) for complex real-world
applications, rather than just games. Deep reinforcement learning has drawn great attention in recent
years. However, most of them today focus on video or board games, and it remains a challenge to
enable DRL for more complicated applications whose rules are not pre-defined and where there is
no explicit reward signals. Dual learning provides a promising way to extract reward signals for
reinforcement learning in real-world applications like machine translation.
The remaining parts of the paper are organized as follows. In Section 2, we briefly review the
literature of neural machine translation. After that, we introduce our dual-learning algorithm for
neural machine translation. The experimental results are provided and discussed in Section 4. We
extend the breadth and depth of dual learning in Section 5 and discuss future work in the last section.
2
Background: Neural Machine Translation
In principle, our dual-learning framework can be applied to both phrase-based statistical machine
translation and neural machine translation. In this paper, we focus on the latter one, i.e., neural
2
machine translation (NMT), due to its simplicity as an end-to-end system, without suffering from
human crafted engineering [5].
Neural machine translation systems are typically implemented with a Recurrent Neural Network (RNN) based encoder-decoder framework. Such a framework learns a probabilistic mapping P (y|x) from
a source language sentence x = {x1 , x2 , ..., xTx } to a target language sentence y = {y1 , y2 , ..., yTy }
, in which xi and yt are the i-th and t-th words for sentences x and y respectively.
To be more concrete, the encoder of NMT reads the source sentence x and generates Tx hidden states
by an RNN:
hi = f (hi?1 , xi )
(1)
in which hi is the hidden state at time i, and function f is the recurrent unit such as Long Short-Term
Memory (LSTM) unit [12] or Gated Recurrent Unit (GRU) [3]. Afterwards, the decoder of NMT
computes the conditional probability of each target word yt given its proceeding words y<t , as well
as the source sentence, i.e., P (yt |y<t , x), which is then used to specify P (y|x) according to the
probability chain rule. P (yt |y<t , x) is given as:
P (yt |y<t , x) ? exp(yt ; rt , ct )
rt = g(rt?1 , yt?1 , ct )
ct = q(rt?1 , h1 , ? ? ? , hTx )
(2)
(3)
(4)
where rt is the decoder RNN hidden state at time t, similarly computed by an LSTM or GRU, and ct
denotes the contextual information in generating word yt according to different encoder hidden states.
ct can be a ?global? signal summarizing sentence x [3, 12], e.g., c1 = ? ? ? = cTy = hTx , or ?local?
PTx
i ,rt?1 )}
signal implemented by an attention mechanism [1], e.g., ct = i=1
?i hi , ?i = Pexp{a(h
exp{a(hj ,rt?1 )} ,
j
where a(?, ?) is a feed-forward neural network.
We denote all the parameters to be optimized in the neural network as ? and denote D as the dataset
that contains source-target sentence pairs for training. Then the learning objective is to seek the
optimal parameters ?? :
?
? = argmax
?
3
Ty
X X
log P (yt |y<t , x; ?)
(5)
(x,y)?D t=1
Dual Learning for Neural Machine Translation
In this section, we present the dual-learning mechanism for neural machine translation. Noticing
that MT can (always) happen in dual directions, we first design a two-agent game with a forward
translation step and a backward translation step, which can provide quality feedback to the dual
translation models even using monolingual data only. Then we propose a dual-learning algorithm,
called dual-NMT, to improve the two translation models based on the quality feedback provided in
the game.
Consider two monolingual corpora DA and DB which contain sentences from language A and B
respectively. Please note these two corpora are not necessarily aligned with each other, and they may
even have no topical relationship with each other at all. Suppose we have two (weak) translation
models that can translate sentences from A to B and verse visa. Our goal is to improve the accuracy
of the two models by using monolingual corpora instead of parallel corpora. Our basic idea is to
leverage the duality of the two translation models. Starting from a sentence in any monolingual data,
we first translate it forward to the other language and then further translate backward to the original
language. By evaluating this two-hop translation results, we will get a sense about the quality of the
two translation models, and be able to improve them accordingly. This process can be iterated for
many rounds until both translation models converge.
Suppose corpus DA contains NA sentences, and DB contains NB sentences. Denote P (.|s; ?AB )
and P (.|s; ?BA ) as two neural translation models, where ?AB and ?BA are their parameters (as
described in Section 2).
Assume we already have two well-trained language models LMA (.) and LMB (.) (which are easy to
obtain since they only require monolingual data), each of which takes a sentence as input and outputs
3
Algorithm 1 The dual-learning algorithm
1: Input: Monolingual corpora DA and DB , initial translation models ?AB and ?BA , language
models LMA and LMB , ?, beam search size K, learning rates ?1,t , ?2,t .
2: repeat
3:
t = t + 1.
4:
Sample sentence sA and sB from DA and DB respectively.
5:
Set s = sA .
. Model update for the game beginning from A.
6:
Generate K sentences smid,1 , . . . , smid,K using beam search according to translation model
P (.|s; ?AB ).
7:
for k = 1, . . . , K do
8:
Set the language-model reward for the kth sampled sentence as r1,k = LMB (smid,k ).
9:
Set the communication reward for the kth sampled sentence as r2,k =
log P (s|smid,k ; ?BA ).
10:
Set the total reward of the kth sample as rk = ?r1,k + (1 ? ?)r2,k .
11:
end for
12:
Compute the stochastic gradient of ?AB :
K
1 X
?
[rk ??AB log P (smid,k |s; ?AB )].
??AB E[r] =
K
k=1
13:
Compute the stochastic gradient of ?BA :
K
1 X
?
[(1 ? ?)??BA log P (s|smid,k ; ?BA )].
??BA E[r] =
K
k=1
14:
Model updates:
?
?
?BA ? ?BA + ?2,t ??BA E[r].
?AB ? ?AB + ?1,t ??AB E[r],
15:
Set s = sB .
. Model update for the game beginning from B.
16:
Go through line 6 to line 14 symmetrically.
17: until convergence
a real value to indicate how confident the sentence is a natural sentence in its own language. Here the
language models can be trained either using other resources, or just using the monolingual data DA
and DB .
For a game beginning with sentence s in DA , denote smid as the middle translation output. This
middle step has an immediate reward r1 = LMB (smid ), indicating how natural the output sentence
is in language B. Given the middle translation output smid , we use the log probability of s recovered
from smid as the reward of the communication (we will use reconstruction and communication
interchangeably). Mathematically, reward r2 = log P (s|smid ; ?BA ).
We simply adopt a linear combination of the LM reward and communication reward as the total
reward, e.g., r = ?r1 + (1 ? ?)r2 , where ? is a hyper-parameter. As the reward of the game can
be considered as a function of s, smid and translation models ?AB and ?BA , we can optimize the
parameters in the translation models through policy gradient methods for reward maximization, as
widely used in reinforcement learning [13].
We sample smid according to the translation model P (.|s; ?AB ). Then we compute the gradient of
the expected reward E[r] with respect to parameters ?AB and ?BA . According to the policy gradient
theorem [13], it is easy to verify that
??BA E[r] = E[(1 ? ?)??BA log P (s|smid ; ?BA )]
(6)
??AB E[r] = E[r??AB log P (smid |s; ?AB )]
(7)
in which the expectation is taken over smid .
Based on Eqn.(6) and (7), we can adopt any sampling approach to estimate the expected gradient.
Considering that random sampling brings very large variance and sometimes unreasonable results in
4
Table 1: Translation results of En?Fr task. The results of the experiments using all the parallel data
for training are provided in the first two columns (marked by ?Large?), and the results using 10%
parallel data for training are in the last two columns (marked by ?Small?).
NMT
pseudo-NMT
dual-NMT
En?Fr (Large)
Fr?En (Large)
En?Fr (Small)
Fr?En (Small)
29.92
30.40
32.06
27.49
27.66
29.78
25.32
25.63
28.73
22.27
23.24
27.50
machine translation [9, 12, 10], we use beam search [12] to obtain more meaningful results (more
reasonable middle translation outputs) for gradient computation, i.e., we greedily generate top-K
high-probability middle translation outputs, and use the averaged value on the beam search results
to approximate the true gradient. If the game begins with sentence s in DB , the computation of the
gradient is just symmetric and we omit it here.
The game can be repeated for many rounds. In each round, one sentence is sampled from DA and
one from DB , and we update the two models according to the game beginning with the two sentences
respectively. The details of this process are given in Algorithm 1.
4
Experiments
We conducted a set of experiments to test the proposed dual-learning mechanism for neural machine
translation.
4.1
Settings
We compared our dual-NMT approach with two baselines: the standard neural machine translation
[1] (NMT for short), and a recent NMT-based method [11] which generates pseudo bilingual sentence
pairs from monolingual corpora to assist training (pseudo-NMT for short). We leverage a tutorial
NMT system implemented by Theano for all the experiments. 2
We evaluated our algorithm on the translation task of a pair of languages: English?French (En?Fr)
and French?English (Fr?En). In detail, we used the same bilingual corpora from WMT?14
as used in [1, 5], which contains 12M sentence pairs extracting from five datasets: Europarl v7,
Common Crawl corpus, UN corpus, News Commentary, and 109 French-English corpus. Following
common practices, we concatenated newstest2012 and newstest2013 as the validation set, and used
newstest2014 as the testing set. We used the ?News Crawl: articles from 2012? provided by WMT?14
as monolingual data.
We used the GRU networks and followed the practice in [1] to set experimental parameters. For each
language, we constructed the vocabulary with the most common 30K words in the parallel corpora,
and out-of-vocabulary words were replaced with a special token <UNK>. For monolingual corpora,
we removed the sentences containing at least one out-of-vocabulary words. Each word was projected
into a continuous vector space of 620 dimensions, and the dimension of the recurrent unit was 1000.
We removed sentences with more than 50 words from the training set. Batch size was set as 80 with
20 batches pre-fetched and sorted by sentence lengths.
For the baseline NMT model, we exactly followed the settings reported in [1]. For the baseline
pseudo-NMT [11], we used the trained NMT model to generate pseudo bilingual sentence pairs from
monolingual data, removed the sentences with more than 50 words, merged the generated data with
the original parallel training data, and then trained the model for testing. Each of the baseline models
was trained with AdaDelta [15] on a K40m GPU until its performance stopped improving on the validation set.
Our method needs a language model for each language. We trained an RNN based language model
[7] for each language using its corresponding monolingual corpus. Then the language model was
fixed, and the log likelihood of a received message was used to reward the communication channel (i.e., the translation model) in our experiments.

² dl4mt-tutorial: https://github.com/nyu-dl

Table 2: Reconstruction performance on the En↔Fr task

              En→Fr→En (L)   Fr→En→Fr (L)   En→Fr→En (S)   Fr→En→Fr (S)
NMT               39.92          45.05          28.28          32.63
pseudo-NMT        38.15          45.41          30.07          34.54
dual-NMT          51.84          54.65          48.94          50.38
While playing the game, we initialized the channels using warm-start translation models (e.g., trained
from bilingual data corpora), and see whether dual-NMT can effectively improve the machine
translation accuracy. In our experiments, in order to smoothly transit from the initial model trained
from bilingual data to the model training purely from monolingual data, we adopted the following
soft-landing strategy. At the very beginning of the dual learning process, for each mini batch, we
used half sentences from monolingual data and half sentences from bilingual data (sampled from
the dataset used to train the initial model). The objective was to maximize the weighted sum of the
reward based on monolingual data defined in Section 3 and the likelihood on bilingual data defined in
Section 2. As the training process went on, we gradually increased the percentage of monolingual sentences in the mini batch, until no bilingual data were used at all. Specifically, we tested two settings in our experiments (a sketch of the mixing schedule follows this list):

• In the first setting (referred to as Large), we used all the 12M bilingual sentence pairs during the soft-landing process. That is, the warm-start model was learnt from the full bilingual data.

• In the second setting (referred to as Small), we randomly sampled 10% of the 12M bilingual sentence pairs and used them during the soft-landing process.
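A minimal sketch of the soft-landing minibatch mixing described above. The linear ramp and its length are assumptions for illustration; the paper does not state the exact schedule:

```python
import random

def mixed_minibatch(bilingual, monolingual, batch_size, step, ramp_steps):
    """Draw a minibatch whose monolingual fraction grows from 1/2 to 1.

    bilingual, monolingual : lists of training pairs / sentences
    step, ramp_steps       : current step and (assumed) ramp length;
                             both corpora must be large enough to sample from
    """
    frac_mono = min(1.0, 0.5 + 0.5 * step / ramp_steps)   # ramps 0.5 -> 1.0
    n_mono = int(round(frac_mono * batch_size))
    batch = random.sample(monolingual, n_mono)
    batch += random.sample(bilingual, batch_size - n_mono)
    return batch
```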
For each of the settings we trained our dual-NMT algorithm for one week. We set the beam search
size to be 2 in the middle translation process. All the hyperparameters in the experiments were set by
cross validation. We used the BLEU score [8] as the evaluation metric, which is computed by the multi-bleu.perl script.³ Following common practice, during testing we used beam search [12]
with beam size of 12 for all the algorithms as in many previous works.
4.2 Results and Analysis
We report the experimental results in this section. Recall that the two baselines for English→French and French→English are trained separately while our dual-NMT conducts joint training. We summarize the overall performances in Table 1 and plot the BLEU scores with respect to the length of
source sentences in Figure 1.
From Table 1 we can see that our dual-NMT algorithm outperforms the baseline algorithms in all
the settings. For the translation from English to French, dual-NMT outperforms the baseline NMT
by about 2.1/3.4 points for the first/second warm start setting, and outperforms pseudo-NMT by
about 1.7/3.1 points for both settings. For the translation from French to English, the improvement is
more significant: our dual-NMT outperforms NMT by about 2.3/5.2 points for the first/second warm
start setting, and outperforms pseudo-NMT by about 2.1/4.3 points for both settings. Surprisingly,
with only 10% bilingual data, dual-NMT achieves comparable translation accuracy as vanilla NMT
using 100% bilingual data for the Fr→En task. These results demonstrate the effectiveness of our
dual-NMT algorithm. Furthermore, we have the following observations:
• Although pseudo-NMT outperforms NMT, its improvements are not very significant. Our
hypothesis is that the quality of pseudo bilingual sentence pairs generated from the monolingual data is not very good, which limits the performance gain of pseudo-NMT. One might
need to carefully select and filter the generated pseudo bilingual sentence pairs to get better
performance for pseudo-NMT.
³ https://github.com/moses-smt/mosesdecoder/blob/master/scripts/generic/multi-bleu.perl
Table 3: Case study of the translation-back-translation (TBT) performance during dual-NMT training (French accents reconstructed from the garbled extraction)

Source (En): The majority of the growth in the years to come will come from its liquefied natural gas schemes in Australia.
  Before dual-NMT training:
    En→Fr: La plus grande partie de la croissance des années à venir viendra de ses systèmes de gaz naturel liquéfié en Australie.
    En→Fr→En: Most of the growth of future years will come from its liquefied natural gas systems in Australia.
  After dual-NMT training:
    En→Fr: La majorité de la croissance dans les années à venir viendra de ses régimes de gaz naturel liquéfié en Australie.
    En→Fr→En: The majority of growth in the coming years will come from its liquefied natural gas systems in Australia.

Source (Fr): Il précise que « les deux cas identifiés en mai 2013 restent donc les deux seuls cas confirmés en France à ce jour ».
  Before dual-NMT training:
    Fr→En: He noted that "the two cases identified in May 2013 therefore remain the only two two confirmed cases in France to date".
    Fr→En→Fr: Il a noté que « les deux cas identifiés en mai 2013 demeurent donc les deux seuls deux deux cas confirmés en France à ce jour ».
  After dual-NMT training:
    Fr→En: He states that "the two cases identified in May 2013 remain the only two confirmed cases in France to date".
    Fr→En→Fr: Il précise que « les deux cas identifiés en mai 2013 restent les seuls cas confirmés en France à ce jour ».
• When the parallel bilingual data are small, dual-NMT makes a larger improvement. This
shows that the dual-learning mechanism makes very good utilization of monolingual data.
Thus we expect dual-NMT will be more helpful for language pairs with smaller labeled
parallel data. Dual-NMT opens a new window to learn to translate from scratch.
We plot BLEU scores with respect to the length of source sentences in Figure 1. From the figure, we
can see that our dual-NMT algorithm outperforms the baseline algorithms in all the ranges of length.
We conduct some deeper studies of our dual-NMT algorithm in Table 2. We study the self-reconstruction
performance of the algorithms: For each sentence in the test set, we translated it forth and back using
the models and then checked how close the back translated sentence is to the original sentence using
the BLEU score. We also used beam search to generate all the translation results. It can be easily
seen from Table 2 that the self-reconstruction BLEU scores of our dual-NMT are much higher than
NMT and pseudo-NMT. In particular, our proposed method outperforms NMT by about 11.9/9.6
points when using warm-start model trained on large parallel data, and outperforms NMT for about
20.7/17.8 points when using the warm-start model trained on 10% parallel data.
We list several example sentences in Table 3 to compare the self-reconstruction results of models
before and after dual learning. It is quite clear that after dual learning, the reconstruction is largely
improved for both directions, i.e., English→French→English and French→English→French.
To summarize, all the results show that the dual-learning mechanism is promising and better utilizes
the monolingual data.
5 Extensions
In this section, we discuss the possible extensions of our proposed dual learning mechanism.
First, although we have focused on machine translation in this work, the basic idea of dual learning is
generally applicable: as long as two tasks are in dual form, we can apply the dual-learning mechanism
to simultaneously learn both tasks from unlabeled data using reinforcement learning algorithms.
Actually, many AI tasks are naturally in dual form, for example, speech recognition versus text
to speech, image caption versus image generation, question answering versus question generation
(e.g., Jeopardy!), search (matching queries to documents) versus keyword extraction (extracting
keywords/queries for documents), and so on. It would be very interesting to design and test
dual-learning algorithms for more dual tasks beyond machine translation.
Second, although we have focused on dual learning on two tasks, our technology is not restricted to
two tasks only. Actually, our key idea is to form a closed loop so that we can extract feedback signals
by comparing the original input data with the final output data. Therefore, if more than two associated
tasks can form a closed loop, we can apply our technology to improve the model in each task from
unlabeled data. For example, for an English sentence x, we can first translate it to a Chinese sentence
y, then translate y to a French sentence z, and finally translate z back to an English sentence x0 . The
similarity between x and x0 can indicate the effectiveness of the three translation models in the loop,
and we can once again apply the policy gradient methods to update and improve these models based
on the feedback signals during the loop. We would like to name this generalized dual learning closed-loop learning, and will test its effectiveness in the future.
[Figure 1: BLEU scores w.r.t. lengths of source sentences. (a) En→Fr; (b) Fr→En. Each panel compares NMT (Large), dual-NMT (Large), NMT (Small), and dual-NMT (Small) over source-sentence-length buckets <10, [10,20), [20,30), [30,40), [40,50), [50,60), >60.]
6 Future Work
We plan to explore the following directions in the future. First, in the experiments we used bilingual
data to warm start the training of dual-NMT. A more exciting direction is to learn from scratch, i.e.,
to learn translations directly from monolingual data of two languages (maybe plus lexical dictionary).
Second, our dual-NMT was based on NMT systems in this work. Our basic idea can also be applied
to phrase-based SMT systems and we will look into this direction. Third, we only considered a pair
of languages in this paper. We will extend our approach to jointly train multiple translation models
for a tuple of 3+ languages using monolingual data.
Acknowledgement
This work was partially supported by National Basic Research Program of China (973 Program)
(grant no. 2015CB352502), NSFC (61573026) and the MOE-Microsoft Key Laboratory of Statistics
and Machine Learning, Peking University. We would like to thank Yiren Wang, Fei Tian, Li Zhao
and Wei Chen for helpful discussions, and the anonymous reviewers for their valuable comments on
our paper.
References
[1] D. Bahdanau, K. Cho, and Y. Bengio. Neural machine translation by jointly learning to align and translate. ICLR, 2015.
[2] T. Brants, A. C. Popat, P. Xu, F. J. Och, and J. Dean. Large language models in machine translation. In Proceedings of the Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning. Citeseer, 2007.
[3] K. Cho, B. van Merrienboer, C. Gulcehre, D. Bahdanau, F. Bougares, H. Schwenk, and Y. Bengio. Learning phrase representations using RNN encoder-decoder for statistical machine translation. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 1724-1734, Doha, Qatar, October 2014. Association for Computational Linguistics.
[4] C. Gulcehre, O. Firat, K. Xu, K. Cho, L. Barrault, H.-C. Lin, F. Bougares, H. Schwenk, and Y. Bengio. On using monolingual corpora in neural machine translation. arXiv preprint arXiv:1503.03535, 2015.
[5] S. Jean, K. Cho, R. Memisevic, and Y. Bengio. On using very large target vocabulary for neural machine translation. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 1-10, Beijing, China, July 2015. Association for Computational Linguistics.
[6] P. Koehn, F. J. Och, and D. Marcu. Statistical phrase-based translation. In Proceedings of the 2003 Conference of the North American Chapter of the Association for Computational Linguistics on Human Language Technology - Volume 1, pages 48-54. Association for Computational Linguistics, 2003.
[7] T. Mikolov, M. Karafiát, L. Burget, J. Cernocký, and S. Khudanpur. Recurrent neural network based language model. In INTERSPEECH, volume 2, page 3, 2010.
[8] K. Papineni, S. Roukos, T. Ward, and W.-J. Zhu. BLEU: a method for automatic evaluation of machine translation. In Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics, pages 311-318. Association for Computational Linguistics, 2002.
[9] M. Ranzato, S. Chopra, M. Auli, and W. Zaremba. Sequence level training with recurrent neural networks. arXiv preprint arXiv:1511.06732, 2015.
[10] A. M. Rush, S. Chopra, and J. Weston. A neural attention model for abstractive sentence summarization. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 379-389, Lisbon, Portugal, September 2015. Association for Computational Linguistics.
[11] R. Sennrich, B. Haddow, and A. Birch. Improving neural machine translation models with monolingual data. In ACL, 2016.
[12] I. Sutskever, O. Vinyals, and Q. V. Le. Sequence to sequence learning with neural networks. In Advances in Neural Information Processing Systems, pages 3104-3112, 2014.
[13] R. S. Sutton, D. A. McAllester, S. P. Singh, Y. Mansour, et al. Policy gradient methods for reinforcement learning with function approximation. In NIPS, volume 99, pages 1057-1063, 1999.
[14] N. Ueffing, G. Haffari, and A. Sarkar. Semi-supervised model adaptation for statistical machine translation. Machine Translation Journal, 2008.
[15] M. D. Zeiler. ADADELTA: an adaptive learning rate method. arXiv preprint arXiv:1212.5701, 2012.
pruning: Optimal Brain Surgeon
Babak Hassibi* and David G. Stork
Ricoh California Research Center
2882 Sand Hill Road, Suite 115
Menlo Park, CA 94025-7022
stork@crc.ricoh.com
and
* Department of Electrical Engineering
Stanford University
Stanford, CA 94305
Abstract
We investigate the use of information from all second order derivatives of the error
function to perfonn network pruning (i.e., removing unimportant weights from a trained
network) in order to improve generalization, simplify networks, reduce hardware or
storage requirements, increase the speed of further training, and in some cases enable rule
extraction. Our method, Optimal Brain Surgeon (OBS), is Significantly better than
magnitude-based methods and Optimal Brain Damage [Le Cun, Denker and Sol1a, 1990],
which often remove the wrong weights. OBS permits the pruning of more weights than
other methods (for the same error on the training set), and thus yields better
generalization on test data. Crucial to OBS is a recursion relation for calculating the
inverse Hessian matrix H-I from training data and structural information of the net. OBS
permits a 90%, a 76%, and a 62% reduction in weights over backpropagation with weighL
decay on three benchmark MONK's problems [Thrun et aI., 1991]. Of OBS, Optimal
Brain Damage, and magnitude-based methods, only OBS deletes the correct weights from
a trained XOR network in every case. Finally, whereas Sejnowski and Rosenberg [1987J
used 18,000 weights in their NETtalk network, we used OBS to prune a network to just
1560 weights, yielding better generalization.
1 Introduction
A central problem in machine learning and pattern recognition is to minimize the system complexity
(description length, VC-dimension, etc.) consistent with the training data. In neural networks this
regularization problem is often cast as minimizing the number of connection weights. Without such weight
elimination, overfitting problems and thus poor generalization will result. Conversely, if there are too few
weights, the network might not be able to learn the training data.
If we begin with a trained network having too many weights, the questions then become: Which weights
should be eliminated? How should the remaining weights be adjusted for best performance? How can such
network pruning be done in a computationally efficient way?
Magnitude based methods [Hertz, Krogh and Palmer, 1991] eliminate weights that have the smallest
magnitude. This simple and naively plausible idea unfortunately often leads to the elimination of the wrong
weights - small weights can be necessary for low error. Optimal Brain Damage [Le Cun, Denker and
Solla, 1990] uses the criterion of minimal increase in training error for weight elimination. For
computational simplicity, OBD assumes that the Hessian matrix is diagonal; in fact, however, Hessians for
every problem we have considered are strongly non-diagonal, and this leads OBD to eliminate the wrong
weights. The superiority of the method described here - Optimal Brain Surgeon - lies in great part in the
fact that it makes no restrictive assumptions about the form of the network's Hessian, and thereby
eliminates the correct weights. Moreover, unlike other methods, OBS does not demand (typically slow)
retraining after the pruning of a weight.
2 Optimal Brain Surgeon
In deriving our method we begin, as do Le Cun, Denker and Solla [1990], by considering a network trained
to a local minimum in error. The functional Taylor series of the error with respect to weights (or
parameters, see below) is:
    δE = (∂E/∂w)^T · δw + ½ δw^T · H · δw + O(‖δw‖³)    (1)

where H ≡ ∂²E/∂w² is the Hessian matrix (containing all second order derivatives) and the superscript T denotes vector transpose. For a network trained to a local minimum in error, the first (linear) term
vanishes: we also ignore the third and all higher order terms. Our goal is then to set one of the weights to
zero (which we call w_q) to minimize the increase in error given by Eq. 1. Eliminating w_q is expressed as:
    δw_q + w_q = 0,    or more generally    e_q^T · δw + w_q = 0    (2)
where e_q is the unit vector in weight space corresponding to (scalar) weight w_q. Our goal is then to solve:
    min_q { min_{δw} { ½ δw^T · H · δw }   such that   e_q^T · δw + w_q = 0 }    (3)
To solve Eq. 3 we form a Lagrangian from Eqs. 1 and 2:
    L = ½ δw^T · H · δw + λ (e_q^T · δw + w_q)    (4)
where λ is a Lagrange undetermined multiplier. We take functional derivatives, employ the constraint of
Eq. 2, and use matrix inversion to find that the optimal weight change and resulting change in error are:
    δw = − (w_q / [H⁻¹]_qq) · H⁻¹ · e_q    and    L_q = w_q² / (2 [H⁻¹]_qq)    (5)
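For completeness, the step from the Lagrangian (4) to the solution (5) can be filled in explicitly; the intermediate lines below are our reconstruction of this standard constrained quadratic minimization, not text from the original:

```latex
\frac{\partial L}{\partial\,\delta w} = H\,\delta w + \lambda\,e_q = 0
  \;\Rightarrow\; \delta w = -\lambda\,H^{-1} e_q,
\qquad
e_q^{T}\delta w + w_q = -\lambda\,[H^{-1}]_{qq} + w_q = 0
  \;\Rightarrow\; \lambda = \frac{w_q}{[H^{-1}]_{qq}} .
```

Back-substituting λ gives the δw of Eq. (5), and inserting that δw into ½ δw^T H δw yields L_q = w_q²/(2[H⁻¹]_qq).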
Note that neither H nor H⁻¹ need be diagonal (as is assumed by Le Cun et al.); moreover, our method recalculates the magnitude of all the weights in the network, by the left side of Eq. 5. We call L_q the "saliency" of weight q - the increase in error that results when the weight is eliminated - a definition more general than Le Cun et al.'s, and which includes theirs in the special case of diagonal H.
Thus we have the following algorithm:
Optimal Brain Surgeon procedure
1. Train a "reasonably large" network to minimum error.
2. Compute H⁻¹.
3. Find the q that gives the smallest saliency L_q = w_q²/(2[H⁻¹]_qq). If this candidate error
increase is much smaller than E, then the qth weight should be deleted, and we
proceed to step 4; otherwise go to step 5. (Other stopping criteria can be used too.)
4. Use the q from step 3 to update all weights (Eq. 5). Go to step 2.
5. No more weights can be deleted without large increase in E. (At this point it may be
desirable to retrain the network.)
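A minimal NumPy sketch of steps 2-4, assuming a routine that supplies H⁻¹ (see Section 3); the interfaces are illustrative assumptions, not the authors' code:

```python
import numpy as np

def obs_prune_step(w, H_inv):
    """One Optimal Brain Surgeon step (Eq. 5): pick the weight with the
    smallest saliency, delete it, and adjust all remaining weights.

    w     : (n,) current weight vector
    H_inv : (n, n) inverse Hessian at w
    Returns the updated weights, the pruned index, and the saliency L_q.
    """
    diag = np.diag(H_inv)
    saliency = w**2 / (2.0 * diag)                  # L_q for every q
    q = int(np.argmin(saliency))                    # step 3: smallest saliency
    delta_w = -(w[q] / H_inv[q, q]) * H_inv[:, q]   # step 4 (Eq. 5)
    w_new = w + delta_w                             # w_new[q] is exactly 0
    return w_new, q, saliency[q]
```

Note that, unlike magnitude pruning, the update vector δw changes every remaining weight, not only the deleted one.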
Figure 1 illustrates the basic idea. The relative magnitudes of the error after pruning (before retraining, if any) depend upon the particular problem, but to second order obey E(mag) ≥ E(OBD) ≥ E(OBS), which is
the key to the superiority of OBS. In this example OBS and OBD lead to the elimination of the same
weight (weight 1). In many cases, however. OBS will eliminate different weights than those eliminated by
OBD (cf. Sect. 6). We call our method Optimal Brain Surgeon because in addition to deleting weights, it
calculates and changes the strengths of other weights without the need for gradient descent or other
incremental retraining.
Figure 1: Error as a function of two weights in a
network. The (local) minimum occurs at weight w*, found by gradient descent or other learning
method. In this illustration, a magnitude based
pruning technique (mag) then removes the
smallest weight, weight 2; Optimal Brain
Damage before retraining (OBD) removes
weight 1. In contrast, our Optimal Brain
Surgeon method (OBS) not only removes weight
1, but also automatically adjusts the value of
weight 2 to minimize the error, without
retraining. The error surface here is general in
that it has different curvatures (second
derivatives) along different directions, a
minimum at a non-special weight value, and a
non-diagonal Hessian (i.e., principal axes are not
parallel to the weight axes). We have found (to
our surprise) that every problem we have
investigated has strongly non-diagonal Hessians
- thereby explaining the improvement of our
method over that of Le Cun et al.
3 Computing the inverse Hessian
The difficulty appears to be step 2 in the OBS procedure, since inverting a matrix of thousands or millions
of terms seems computationally intractable. In what follows we shall give a general derivation of the
inverse Hessian for a fully trained neural network. It makes no difference whether it was trained by
backpropagation, competitive learning, the Boltzmann algorithm, or any other method, so long as
derivatives can be taken (see below). We shall show that the Hessian can be reduced to the sample
covariance matrix associated with certain gradient vectors. Furthermore, the gradient vectors necessary for
OBS are normally available at small computational cost; the covariance form of the Hessian yields a
recursive formula for computing the inverse.
Consider a general non-linear neural network that maps an input vector, in, of dimension n_i into an output vector, o, of dimension n_o, according to the following:

    o = F(w, in)    (6)
where w is an n dimensional vector representing the neural network's weights or other parameters. We
shall refer to w as a weight vector below for simplicity and definiteness, but it must be stressed that w could
represent any continuous parameters, such as those describing neural transfer function, weight sharing, and
so on. The mean square error corresponding to the training set is defined as:

    E = (1/2P) Σ_{k=1}^{P} (t^[k] − o^[k])^T (t^[k] − o^[k])    (7)
where P is the number of training patterns, and t^[k] and o^[k] are the desired response and network response for the kth training pattern. The first derivative with respect to w is:

    ∂E/∂w = −(1/P) Σ_{k=1}^{P} (∂F(w, in^[k])/∂w) · (t^[k] − o^[k])    (8)
and the second derivative or Hessian is:
    H = ∂²E/∂w² = (1/P) Σ_{k=1}^{P} [ (∂F(w, in^[k])/∂w) · (∂F(w, in^[k])/∂w)^T − (∂²F(w, in^[k])/∂w²) · (t^[k] − o^[k]) ]    (9)
Next we consider a network fully trained to a local minimum in error at w*. Under this condition the
network response o^[k] will be close to the desired response t^[k], and hence we neglect the term involving (t^[k] − o^[k]). Even late in pruning, when this error is not small for a single pattern, this approximation can be
justified (see next Section). This simplification yields:
    H = (1/P) Σ_{k=1}^{P} (∂F(w, in^[k])/∂w) · (∂F(w, in^[k])/∂w)^T    (10)
If our network has just a single output, we may define the n-dimensional data vector X^[k] of derivatives as:

    X^[k] = ∂F(w, in^[k])/∂w    (11)

Thus Eq. 10 can be written as:

    H = (1/P) Σ_{k=1}^{P} X^[k] · X^[k]T    (12)
If instead our network has multiple output units, then X will be an n × n_o matrix of the form:

    X^[k] = ∂F(w, in^[k])/∂w = ( ∂F_1(w, in^[k])/∂w, ..., ∂F_{n_o}(w, in^[k])/∂w ) = ( X_1^[k], ..., X_{n_o}^[k] )    (13)
where F_j is the j-th component of F. Hence in this multiple output unit case Eq. 10 generalizes to:

    H = (1/P) Σ_{k=1}^{P} Σ_{l=1}^{n_o} X_l^[k] · X_l^[k]T    (14)
Equations 12 and 14 show that H is the sample covariance matrix associated with the gradient variable X.
Equation 12 also shows that for the single output case we can calculate the full Hessian by sequentially
adding in successive "component" Hessians as:
H_{m+1} = H_m + (1/P) · X^[m+1] · X^[m+1]T,  with H_0 = αI and H_P = H   (15)
But Optimal Brain Surgeon requires the inverse of H (Eq. 5). This inverse can be calculated using a standard matrix inversion formula [Kailath, 1980]:

(A + B · C · D)^{-1} = A^{-1} − A^{-1} · B · (C^{-1} + D · A^{-1} · B)^{-1} · D · A^{-1}   (16)
applied to each term in the analogous sequence in Eq. 15:
H^{-1}_{m+1} = H^{-1}_m − ( H^{-1}_m · X^[m+1] · X^[m+1]T · H^{-1}_m ) / ( P + X^[m+1]T · H^{-1}_m · X^[m+1] ),  with H^{-1}_0 = α^{-1} I and H^{-1}_P = H^{-1}   (17)
where α (10^{-8} ≤ α ≤ 10^{-4}) is a small constant needed to make H_0^{-1} meaningful, and to which our method is insensitive [Hassibi, Stork and Wolff, 1993b]. Actually, Eq. 17 leads to the calculation of the inverse of (H + αI), and this corresponds to the introduction of a penalty term α||δw||² in Eq. 4. This has the benefit of penalizing large candidate jumps in weight space, and thus helps to ensure that the neglect of higher order terms in Eq. 1 is valid.
Equation 17 permits the calculation of H^{-1} using a single sequential pass through the training data, 1 ≤ m ≤ P. It is also straightforward to generalize Eq. 17 to the multiple output case of Eq. 14: in this case Eq. 17 will have recursions on both the indices m and l, giving:
H_{m, l+1} = H_{m, l} + (1/P) · X_{l+1}^[m] · X_{l+1}^[m]T

H_{m+1, 1} = H_{m, n_o} + (1/P) · X_1^[m+1] · X_1^[m+1]T   (18)
To sequentially calculate H^{-1} for the multiple output case, we use Eq. 16, as before.
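For concreteness, the one-pass recursion of Eq. 17 and the pruning step it feeds can be sketched in a few lines of Python. This is a minimal sketch for the single-output case; the function names and the default value of α are our own choices, and the saliency and weight-update expressions in the second function are the standard OBS formulas that the text refers to as Eq. 5:

    import numpy as np

    def obs_inverse_hessian(X, alpha=1e-6):
        """One-pass recursive inverse Hessian of Eq. 17.

        X: (P, n) array of per-pattern gradient vectors X^[k] (single output).
        alpha: small constant; the recursion effectively inverts H + alpha*I.
        """
        P, n = X.shape
        H_inv = np.eye(n) / alpha                    # H_0^{-1} = alpha^{-1} I
        for k in range(P):
            x = X[k][:, None]                        # column vector X^[k]
            Hx = H_inv @ x
            denom = P + (x.T @ Hx).item()
            H_inv -= (Hx @ Hx.T) / denom             # Eq. 17 update
        return H_inv

    def obs_prune_step(w, H_inv):
        """Pick the weight with smallest saliency L_q = w_q^2 / (2 [H^{-1}]_qq)
        and return its index together with the adjusted weight vector."""
        saliency = w**2 / (2.0 * np.diag(H_inv))
        q = int(np.argmin(saliency))
        w_new = w - (w[q] / H_inv[q, q]) * H_inv[:, q]   # zeros w_q, adjusts the rest
        return q, w_new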
4 The (t − o) ≈ 0 approximation
The approximation used for Eq. 10 can be justified on computational and functional grounds, even late in
pruning when the training error is not negligible. From the computational view, we note first that normally
H is degenerate - especially before significant pruning has been done - and its inverse not well defined.
The approximation guarantees that there are no singularities in the calculation of H^{-1}. It also keeps the computational complexity of calculating H^{-1} the same as that for calculating H: O(P · n²). In statistics the
approximation is the basis of Fisher's method of scoring and its goal is to replace the true Hessian with its
expected value and guarantee that H is positive definite (thereby avoiding stability problems that can
plague Gauss-Newton methods) [Seber and Wild, 1989].
Equally important are the functional justifications of the approximation. Consider a high capacity network trained to small training error. We can consider the network structure as involving both signal and noise. As we prune, we hope to eliminate those weights that lead to "overfitting," i.e., learning the noise. If our pruning method did not employ the (t − o) ≈ 0 approximation, every pruning step (Eqs. 9 and 5) would inject the noise back into the system, by penalizing for noise terms. A different way to think of the approximation is the following. After some pruning by OBS we have reached a new weight vector that is a local minimum of the error (cf. Fig. 1). Even if this error is not negligible, we want to stay as close to that value of the error as we can. Thus we imagine a new, effective teaching signal t*, that would keep the network near this new error minimum. It is then (t* − o) that we in effect set to zero when using Eq. 10 instead of Eq. 9.
5 OBS and backpropagation

Using the standard terminology from backpropagation [Rumelhart, Hinton and Williams, 1986] and the single output network of Fig. 2, it is straightforward to show from Eq. 11 that the derivative vectors are:
X^[k] = ( X_v^[k] ; X_u^[k] )   (19)

where

[X_v^[k]]^T = ( f'(net^[k]) o_1^[k], ..., f'(net^[k]) o_{n_j}^[k] )   (20)

refers to derivatives with respect to the hidden-to-output weights v_j, and

[X_u^[k]]^T = ( f'(net^[k]) f'(net_1^[k]) v_1 in_1^[k], ..., f'(net^[k]) f'(net_1^[k]) v_1 in_{n_i}^[k], ..., f'(net^[k]) f'(net_{n_j}^[k]) v_{n_j} in_1^[k], ..., f'(net^[k]) f'(net_{n_j}^[k]) v_{n_j} in_{n_i}^[k] )   (21)

refers to derivatives with respect to the input-to-hidden weights u_ji, where lexicographical ordering has been used. The neuron nonlinearity is f(·).
Figure 2: Backpropagation net with n_i inputs and n_j hidden units. The input-to-hidden weights are u_ji and the hidden-to-output weights are v_j. The derivative ("data") vectors are X_v and X_u (Eqs. 20 and 21).
6 Simulation results
We applied OBS, Optimal Brain Damage, and a magnitude based pruning method to the 2-2-1 network with bias unit of Fig. 3, trained on all patterns of the XOR problem. The network was first trained to a local minimum, which had zero error, and then the three methods were used to prune one weight. As shown, the methods deleted different weights. We then trained the original XOR network from different initial conditions, thereby leading to different local minima. Whereas there were some cases in which OBD or magnitude methods deleted the correct weight, only OBS deleted the correct weight in every case. Moreover, OBS changed the values of the remaining weights (Eq. 5) to achieve perfect performance without any retraining by the backpropagation algorithm. Figure 4 shows the Hessian of the trained but unpruned XOR network.
Figure 3: A nine weight XOR network trained to a local minimum. The thickness of the lines indicates the weight magnitudes, and inhibitory weights are shown dashed. Subsequent pruning using a magnitude based method (Mag) would delete weight v3; using Optimal Brain Damage (OBD) would delete u22. Even with retraining, the network pruned by those methods cannot learn the XOR problem. In contrast, Optimal Brain Surgeon (OBS) deletes u23 and furthermore changed all other weights (cf. Eq. 5) to achieve zero error on the XOR problem.
Figure 4:
The Hessian of the trained but
unpruned XOR network, calculated by means of
Eq. 12. White represents large values and black
small magnitudes. The rows and columns are
labeled by the weights shown in Fig. 3. As is to
be expected, the hidden-to-output weights have
significant Hessian components. Note especially
that the Hessian is far from being diagonal. The
Hessians for all problems we have investigated,
including the MONK's problems (below), are far
from being diagonal.
Figure 5 shows two-dimensional "slices" of the nine-dimensional error surface in the neighborhood of a local minimum at w* for the XOR network. The cuts compare the weight elimination of magnitude methods (left) and OBD (right) with the elimination and weight adjustment given by OBS.
Figure 5: (Left) The XOR error surface as a function of weights v3 and u23 (cf. Fig. 4). A magnitude based pruning method would delete weight v3 whereas OBS deletes u23. (Right) The XOR error surface as a function of weights u22 and u23. Optimal Brain Damage would delete u22 whereas OBS deletes u23. For this minimum, only deleting u23 will allow the pruned network to solve the XOR problem.
After all network weights are updated by Eq. 5 the system is at zero error (not shown). It is especially
noteworthy that in neither case of pruning by magnitude methods nor Optimal Brain Damage will further
retraining by gradient descent reduce the training error to zero. In short, magnitude methods and Optimal
Brain Damage delete the wrong weights, and their mistake cannot be overcome by further network training.
Only Optimal Brain Surgeon deletes the correct weight.
We also applied OBS to larger problems, the three MONK's problems, and compared our results to those of Thrun et al. [1991], whose backpropagation network outperformed all other approaches (network and rule-based) on these benchmark problems in an extensive machine learning competition.
                      MONK 1          MONK 2          MONK 3
                      BPWD    OBS     BPWD    OBS     BPWD    OBS
Accuracy (training)   100     100     100     100     93.4    93.4
Accuracy (testing)    100     100     100     100     97.2    97.2
# weights             58      14      39      15      39      4

Table 1: The accuracy (in %) and number of weights found by backpropagation with weight decay (BPWD), from Thrun et al. [1991], and by OBS, on the three MONK's problems.
Table 1 shows that for the same performance, OBS (without retraining) required only 24%, 38% and 10% of the weights of the backpropagation network, which was already regularized with weight decay (Fig. 6). The error increase (Eq. 5) accompanying pruning by OBS negligibly affected accuracy.
Figure 6: Optimal networks found by Thrun using backpropagation with weight decay (left) and by OBS (right) on MONK 1, which is based on logical rules. Solid (dashed) lines denote excitatory (inhibitory) connections; bias units are at left.
The dramatic reduction in weights achieved by OBS yields a network that is simple enough that the logical
rules that generated the data can be recovered from the pruned network, for instance by the methods of
Towell and Shavlik [1992]. Hence OBS may help to address a criticism often levied at neural networks:
the fact that they may be unintelligible.
We applied OBS to a three-layer NETtalk network. While Sejnowski and Rosenberg [1987] used 18,000
weights, we began with just 5546 weights, which after backpropagation training had a test error of 5259.
After pruning this net with OBS to 2438 weights, and then retraining and pruning again, we achieved a net
with only 1560 weights and test error of only 4701 - a significant improvement over the original, more
complex network [Hassibi, Stork and Wolff, 1993a]. Thus OBS can be applied to real-world pattern
recognition problems such as speech recognition and optical character recognition, which typically have
several thousand parameters.
7 Analysis and conclusions
Why is Optimal Brain Surgeon so successful at reducing excess degrees of freedom? Conversely, given
this new standard in weight elimination, we can ask: Why are magnitude based methods so poor?
Consider again Fig. 1. Starting from the local minimum at w*, a magnitude based method deletes the wrong weight, weight 2, and through retraining, weight 1 will increase. The final "solution" is weight 1 → large, weight 2 = 0. This is precisely the opposite of the solution found by OBS: weight 1 = 0, weight 2 → large. Although the actual difference in error shown in Fig. 1 may be small, in large networks,
differences from many incorrect weight elimination decisions can add up to a significant increase in error.
But most importantly, it is simply wishful thinking to believe that after the elimination of many incorrect
weights by magnitude methods the net can "sort it all out" through further training and reach a global
optimum, especially if the network has already been pruned significantly (cf. XOR discussion, above).
We have also seen how the approximation employed by Optimal Brain Damage - that the diagonals of the
Hessian are dominant - does not hold for the problems we have investigated. There are typically many
off-diagonal terms that are comparable to their diagonal counterparts. This explains why OBD often
deletes the wrong weight, while OBS deletes the correct one.
We note too that our method is quite general, and subsumes previous methods for weight elimination. In our terminology, magnitude based methods assume an isotropic Hessian (H ∝ I); OBD assumes diagonal H; FARM [Kung and Hu, 1991] assumes linear f(net) and only updates the hidden-to-output weights. We have shown that none of those assumptions is valid or sufficient for optimal weight elimination.
We should also point out that our method is even more general than presented here [Hassibi, Stork and Wolff, 1993b]. For instance, rather than pruning a weight (parameter) by setting it to zero, one can instead reduce a degree of freedom by projecting onto an arbitrary plane, e.g., w_q = a constant, though such networks typically have a large description length [Rissanen, 1978]. The pruning constraint w_q = 0 discussed throughout this paper makes retraining (if desired) particularly simple. Several weights can be deleted simultaneously; bias weights can be exempt from pruning; and so forth. A slight generalization of OBS employs cross-entropy or the Kullback-Leibler error measure, leading to the Fisher information matrix rather than the Hessian [Hassibi, Stork and Wolff, 1993b]. We note too that OBS does not by itself give a criterion for when to stop pruning, and thus OBS can be utilized with a wide variety of such criteria.
Moreover, gradual methods such as weight decay during learning can be used in conjunction with OBS.
Acknowledgements
The first author was supported in part by grants AFOSR 91-0060 and DAAL03-91-C-0010 to T. Kailath, who in turn provided constant encouragement. Deep thanks go to Greg Wolff (Ricoh) for assistance with simulations and analysis, and to Jerome Friedman (Stanford) for pointers to relevant statistics literature.
REFERENCES
Hassibi, B., Stork, D. G. and Wolff, G. (1993a). Optimal Brain Surgeon and general network pruning (submitted to ICNN, San Francisco).
Hassibi, B., Stork, D. G. and Wolff, G. (1993b). Optimal Brain Surgeon, Information Theory and network capacity control (in preparation).
Hertz, J., Krogh, A. and Palmer, R. G. (1991). Introduction to the Theory of Neural Computation
Addison-Wesley.
Kailath, T. (1980). Linear Systems Prentice-Hall.
Kung, S. Y. and Hu, Y. H. (1991). A Frobenius approximation reduction method (FARM) for determining the optimal number of hidden units, Proceedings of the IJCNN-91, Seattle, Washington.
Le Cun, Y., Denker, J. S. and Solla, S. A. (1990). Optimal Brain Damage, in Proceedings of the Neural Information Processing Systems-2, D. S. Touretzky (ed.) 598-605, Morgan-Kaufmann.
Rissanen, J. (1978). Modelling by shortest data description, Automatica 14, 465-471.
Rumelhart, D. E., Hinton, G. E., and Williams, R. J. (1986). Learning internal representations by error propagation, Chapter 8 (318-362) in Parallel Distributed Processing I, D. E. Rumelhart and J. L. McClelland (eds.), MIT Press.
Seber, G. A. F. and Wild, C. J. (1989). Nonlinear Regression 35-36 Wiley.
Sejnowski, T. J., and Rosenberg, C. R. (1987). Parallel networks that learn to pronounce English text, Complex Systems 1, 145-168.
Thrun, S. B. and 23 co-authors (1991). The MONK's Problems - A performance comparison of different learning algorithms, CMU-CS-91-197, Carnegie Mellon University Department of Computer Science Tech Report.
Towell, G. and Shavlik, J. W. (1992). Interpretation of artificial neural networks: Mapping knowledge-based neural networks into rules, in Proceedings of the Neural Information Processing Systems-4, J. E. Moody, D. S. Touretzky and R. P. Lippmann (eds.) 977-984, Morgan-Kaufmann.
Efficient Neural Codes under Metabolic Constraints
Zhuo Wang*†
Department of Mathematics
University of Pennsylvania
wangzhuo@nyu.edu
Xue-Xin Wei*‡
Department of Psychology
University of Pennsylvania
weixxpku@gmail.com
Alan A. Stocker
Department of Psychology
University of Pennsylvania
astocker@sas.upenn.edu
Daniel D. Lee
Department of Electrical and System Engineering
University of Pennsylvania
ddlee@seas.upenn.edu
Abstract
Neural codes are inevitably shaped by various kinds of biological constraints, e.g.
noise and metabolic cost. Here we formulate a coding framework which explicitly
deals with noise and the metabolic costs associated with the neural representation of
information, and analytically derive the optimal neural code for monotonic response
functions and arbitrary stimulus distributions. For a single neuron, the theory
predicts a family of optimal response functions depending on the metabolic budget
and noise characteristics. Interestingly, the well-known histogram equalization
solution can be viewed as a special case when metabolic resources are unlimited.
For a pair of neurons, our theory suggests that under more severe metabolic
constraints, ON-OFF coding is an increasingly more efficient coding scheme
compared to ON-ON or OFF-OFF. The advantage could be as large as one-fold,
substantially larger than the previous estimation. Some of these predictions could
be generalized to the case of large neural populations. In particular, these analytical
results may provide a theoretical basis for the predominant segregation into ON- and OFF-cells in early visual processing areas. Overall, we provide a unified
framework for optimal neural codes with monotonic tuning curves in the brain, and
makes predictions that can be directly tested with physiology experiments.
1 Introduction
The efficient coding hypothesis [1, 2] plays a fundamental role in understanding neural codes,
particularly in early sensory processing. Going beyond the original idea of redundancy reduction by
Horace Barlow [2], efficient coding has become a general conceptual framework for studying optimal
neural coding [3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14]. Efficient coding theory hypothesizes that the
neural code is organized in a way such that maximal information is conveyed about the stimulus
variable. Notably, any formulation of efficient coding necessarily relies on a set of constraints due
to real world limitations imposed on neural systems. For example, neural noise, metabolic energy
budgets, tuning curve characteristics and the size of the neural population all can have impacts on the
quality of the neural code.
Most previous studies have only considered a small subset of these constraints. For example, the
original redundancy reduction argument proposed by Barlow has focused on utilizing the dynamical
* Equal contribution.
† Current affiliation: Center for Neural Science, New York University.
‡ Current affiliation: Department of Statistics and Center for Theoretical Neuroscience, Columbia University.
30th Conference on Neural Information Processing Systems (NIPS 2016), Barcelona, Spain.
range of the neurons efficiently [2, 15], but did not take the neural noise model and energy consumption into consideration. Some studies explicitly dealt with the metabolic costs of the system but did not
consider the constraints imposed by the limited firing rates of neurons as well as their detailed tuning
properties [16, 7, 17, 18]. As another prominent example, histogram equalization has been proposed
as the mechanism for determining the optimal tuning curve of a single neuron with monotonic
response characteristics [19]. However, this result only holds for a specific neural noise model and
does not take metabolic costs into consideration either. In terms of neural population, most previous
studies have focused on bell-shaped tuning curves. Optimal neural coding for neural populations with monotonic tuning curves has received much less attention [20, 21].
We develop a formulation of efficient coding that explicitly deals with multiple biologically relevant
constraints, including neural noise, limited range of the neural output, and metabolic consumption.
With this formulation, we can study neural codes based on monotonic response characteristics that
have been frequently observed in biological neural systems. We are able to derive analytical solutions
for a wide range of conditions in the small noise limit. We present results for neural populations of
different sizes, including the cases of a single neuron, pairs of neurons, as well as a brief treatment for
larger neural populations. The results are in general agreements with observed coding schemes for
monotonic tuning curves. The results also provide various quantitative predictions which are readily
testable with targeted physiology experiments.
2 Optimal Code for a Single Neuron

2.1 Models and Methods
We start with the simple case where a scalar stimulus s with prior p(s) is encoded by a single neuron.
To model the neural response for a stimulus s, we first denote the mean output level as a deterministic
function h(s). Here h(s) could denote the mean firing rate in the context of rate coding or just the
mean membrane potential. In either case, the actual response r is noisy and can be modeled by a
probabilistic model P (r|h(s)). Throughout the paper, we limit the neural codes to be monotonic
functions h(s). The mutual information between the input stimulus r and the neural response is
denoted as MI(s, r).
We formulate the efficient coding problem as the maximization of the mutual information between the
stimulus and the response, e.g., MI(s, r) [3]. To complete the formulation of this problem, it is crucial
to choose a set of constraints which characterizes the limited resource available to the neural system.
One important constraint is the finite range of the neural output [19]. Another constraint is on the
mean metabolic cost [16, 7, 17, 18], which limits the mean activity level of neural output, averaged
over the stimulus prior. Under these constraints, the efficient coding problem can mathematically be
formulated as following:
maximize MI(s, r)
subject to 0 ≤ h(s) ≤ r_max, h'(s) ≥ 0   (range constraint)
          E_s[K(h(s))] ≤ K_total   (metabolic constraint)
We seek the optimal response function h(s) under various choices of the neural noise model P(r|h(s)) and the metabolic cost function K(h(s)), as discussed below.
Neural Noise Models: Neural noise can often be well characterized by a Poisson distribution at
relatively short time scale [22]. Under the Poisson noise model, the number of spikes N_T over a duration of T is a Poisson random variable with mean h(s)T and variance h(s)T. In the long T limit, the mean response r = N_T/T approximately follows a Gaussian distribution:

r ∼ N(h(s), h(s)/T)   (1)
Non-Poisson noise has also been observed physiologically. In these cases, the variance of N_T can be greater or smaller than the mean firing rate [22, 23, 24, 25]. We thus consider a more generic family of noise models parametrized by α:

r ∼ N(h(s), h(s)^α / T)   (2)

This generalized family of noise models naturally includes the additive Gaussian noise case (when α = 0), which is useful for describing the stochasticity of the membrane potential of a neuron.
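For intuition, the whole family of Eq. 2 can be simulated in one line; the following is a minimal sketch (the function name and the seeding are our own conventions):

    import numpy as np

    rng = np.random.default_rng(0)

    def sample_response(h_s, alpha, T, size=None):
        """Draw r ~ N(h(s), h(s)^alpha / T), the noise family of Eq. 2.

        alpha = 0 gives additive Gaussian noise; alpha = 1 matches the
        Gaussian approximation of Poisson spiking in Eq. 1.
        """
        return rng.normal(loc=h_s, scale=np.sqrt(h_s**alpha / T), size=size)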
Metabolic Cost: We model the metabolic cost K as a power-law function of the neural output:

K(h(s)) = h(s)^β   (3)

where β > 0 is a parameter modeling how the energy cost scales as the neural output increases. For a single neuron we will work with this general cost function, but when we generalize to the case of multiple neurons we will assume β = 1 for simplicity. Note that it does not require extra effort to solve the problem if the cost function takes the more general form K(h(s)) = K_0 + K_1 h(s)^β, as reported in [26]. This is because of the linear nature of the expectation
term in the metabolic constraint.
2.2 Derivation of the Optimal h(s)
This efficient coding problem can be greatly simplified because it is invariant under any re-parameterization of the stimulus variable s. We take advantage of this by mapping s to a uniform random variable u ∈ [0, 1] via the cumulative distribution function u = F(s) [27]. If we choose g(u) = g(F(s)) = h(s), it suffices to solve the following new problem, which optimizes g(u) for a re-parameterized input u with uniform prior:

maximize MI(u, r)
subject to 0 ≤ g(u) ≤ r_max, g'(u) ≥ 0
          E_u[K(g(u))] ≤ K_total
Once the optimal form of g*(u) is obtained, the optimal h*(s) is naturally given by g*(F(s)). To solve this simplified problem, we first express the objective function in terms of g(u). In the small noise limit (large integration time T), the Fisher information I_F(u) of the neuron with the noise model in Eq. (2) can be calculated, and the mutual information can be approximated as (see [28, 14]):
I_F(u) = T · g'(u)² / g(u)^α + O(1)   (4)

MI(u, r) = H(U) + (1/2) ∫ p(u) log I_F(u) du = (1/2) ∫₀¹ log( g'(u)² / g(u)^α ) du + (1/2) log T + O(1/T)   (5)
where H(U) = 0 is the entropy and p(u) = 1_{0 ≤ u ≤ 1} is the density of the uniform distribution.
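The right-hand side of Eq. 5 is straightforward to evaluate numerically for any candidate tuning curve, which is useful for sanity-checking the analytical results below. A minimal sketch, in which the function name, grid size, and endpoint clipping are our own choices:

    import numpy as np

    def mi_fisher_approx(g, alpha, T, n=10001):
        """Fisher approximation of MI(u, r) in Eq. 5, in nats.

        g: callable monotone tuning curve on [0, 1]; the O(1/T) term is dropped.
        """
        u = np.linspace(1e-4, 1.0 - 1e-4, n)   # avoid endpoints where g may vanish
        gu = g(u)
        gp = np.gradient(gu, u)                # numerical g'(u)
        return 0.5 * np.trapz(np.log(gp**2 / gu**alpha), u) + 0.5 * np.log(T)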
Furthermore, each constraint can be rewritten as an integral of g'(u) or g(u), respectively:

g(1) − g(0) = ∫₀¹ g'(u) du ≤ r_max   (6)

E_u[K(g(u))] = ∫₀¹ g(u)^β du ≤ K_total   (7)
This form of the problem (Eqs. 5-7) can be solved analytically using the Lagrange multiplier method, and the optimal response function must take the form:

g(u) = r_max · ( (1/a) · γ_q^{-1}( u · γ_q(a) ) )^{1/β},  h(s) = g(F(s))   (8)

where q := (1 − α/2)/β,  γ_q(x) := ∫₀^x z^{q−1} exp(−z) dz   (9)
The function γ_q(x) is called the (lower) incomplete gamma function and γ_q^{-1} is its inverse. Due to space limitations we only present a sketch of the derivation. Readers who are interested in the detailed proof are referred to the supplementary materials.
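As an aside, Eq. 8 is easy to evaluate with standard special-function libraries. The sketch below uses SciPy's regularized incomplete gamma functions; the Γ(q) normalization factors cancel inside γ_q^{-1}(u · γ_q(a)), so the regularized versions can be substituted directly. The function name and the example value of a are our own choices (in practice a would be tuned so that Eq. 7 meets the available budget):

    import numpy as np
    from scipy.special import gammainc, gammaincinv  # regularized lower incomplete gamma

    def optimal_tuning(u, a, alpha, beta, r_max=1.0):
        """Optimal response function g(u) of Eq. 8."""
        q = (1.0 - alpha / 2.0) / beta
        x = gammaincinv(q, u * gammainc(q, a))     # = gamma_q^{-1}(u * gamma_q(a))
        return r_max * (x / a) ** (1.0 / beta)

    # Example: Poisson-like noise (alpha = 1), linear cost (beta = 1), so q = 1/2.
    u = np.linspace(1e-6, 1.0 - 1e-6, 1001)
    g = optimal_tuning(u, a=2.0, alpha=1.0, beta=1.0)
    K_used = np.trapz(g, u)   # metabolic cost of this curve (Eq. 7); grows as a shrinks

For a non-uniform prior p(s), the corresponding h(s) is obtained by composing with the CDF, e.g. h = lambda s: optimal_tuning(norm.cdf(s), ...) for a Gaussian prior (using scipy.stats.norm).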
Let us now turn to some intuitive conclusions behind this solution (also see Fig.1, in which we
have assumed r_max = 1 for simplicity). From Eq. (8), it is clear that the optimal solution g(u) depends on the constant a, which should be determined by saturating the metabolic constraint (see the horizontal dashed lines in Fig. 1a). Furthermore, the optimal solution h(s) depends on the specific input distribution p(s). Depending on the relative magnitude of r_max and K_total:
• Range constraint dominates: This is the case when there is more than sufficient energy, so that the metabolic constraint becomes completely redundant. Determined by α, β and r_max, K_thre is the energy consumption of the optimal code with unconstrained metabolic budget. When the available metabolic budget exceeds this threshold, K_total ≥ K_thre, the constant a is very close to zero and the optimal g(u) is proportional to a power function, g(u) = r_max · u^{1/q}. See red curves in Fig. 1.
• Both constraints: This is the general case, when K_total ≲ K_thre. The constant a is set to the minimum value for which the metabolic constraint is satisfied. See purple curves in Fig. 1.
• Metabolic constraint dominates: This happens when K_total ≪ K_thre. In this case a is often very large. See blue curves in Fig. 1.
Figure 1: Deriving optimal tuning curves g(u) and corresponding h(s) for different prior distributions and different noise models. Top row: constant Gaussian noise, (α, β, q) = (0, 1, 1); bottom row: Poisson noise, (α, β, q) = (1, 1, 1/2). (a) A segment of the inverse incomplete gamma function is cropped out by dashed boxes. The higher the horizontal dashed line (constant a), the lower the average metabolic cost, which corresponds to a more substantial metabolic constraint. We thus use "low", "high" and "max" to label the energy costs under different metabolic constraints. (b) The optimal solution g(u) for a uniform variable u. (c) The corresponding optimal h(s) for a Gaussian prior. (d) The corresponding optimal h(s) for a Gamma distribution p(s) ∝ s^{q−1} exp(−s). Specifically for this prior, the optimal tuning curve is exactly linear without the maximum response constraint. (e-h) Similar to (a-d), but for Poisson noise.
2.3 Properties of the Optimal h(s)
We have predicted the optimal response function for arbitrary values of α (which corresponds to the noise model) and β (which quantifies the metabolic cost model). Here we specifically focus on a few situations with the most biological relevance.

We begin with the simple additive Gaussian noise model, i.e. α = 0. This model could provide a
good characterization of the response mapping from the input stimulus to the membrane potential
of a neuron [19]. With more than sufficient metabolic supply, the optimal solution falls back to the
histogram equalization principle where each response magnitude is utilized to the same extent (red
curve in Fig. 1b and Fig.2a). With less metabolic budget, the optimal tuning curve bends downwards
to satisfy this constraint and large responses will be penalized, resulting in more density at smaller
response magnitude (purple curve in Fig. 2a). In the other extreme, when the available metabolic budget K_total is diminishing, the response magnitude converges to the max-entropy distribution under the metabolic constraint E[g(u)^β] = const (blue curve in Fig. 2a).
Next we discuss the case of Poisson spiking neurons. In the extreme case when the range constraint
dominates, the model predicts a square tuning curve for uniform input (red curve in Fig.1f), which is
consistent with previous studies [29, 30]. We also found that the Poisson noise model leads to heavier
penalization on large response magnitude compared to Gaussian noise, suggesting an interaction
between noise and metabolic cost in shaping the optimal neural response distribution. In the other extreme, when K_total goes to 0, the response distribution converges to a gamma distribution with a heavy tail (see Fig. 2). Our analytical result gives a simple yet quantitative explanation of the emergence of sparse coding [7] from an energy-efficiency perspective.
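The claimed limiting distributions are easy to check by pushing a dense uniform sample through the optimal tuning curve and histogramming the responses. A minimal sketch, reusing the hypothetical optimal_tuning helper sketched above:

    import numpy as np

    u = np.random.default_rng(1).uniform(size=200_000)
    r_gauss = optimal_tuning(u, a=50.0, alpha=0.0, beta=1.0)  # tight budget, Gaussian noise
    r_poiss = optimal_tuning(u, a=50.0, alpha=1.0, beta=1.0)  # tight budget, Poisson noise
    # np.histogram(r_poiss, bins=50) concentrates near 0 with a heavy tail,
    # consistent with the gamma-like limit described above.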
Figure 2: Probability of generating a certain response g(u) based on the optimal tuning of a single neuron under (a) the Gaussian noise model and (b) the Poisson noise model. In the extreme case of Gaussian noise with effectively no metabolic constraint, the response distribution is uniform over the whole range.
3 Optimal Code for a Pair of Neurons
We next study the optimal coding in the case of two neurons with monotonic response functions. We
denote the neural responses as r = (r_1, r_2). Therefore the efficient coding problem becomes:
maximize MI(s, r)
subject to 0 ≤ h_i(s) ≤ r_max, i = 1, 2   (range constraint)
          E_s[K(h_1(s)) + K(h_2(s))] ≤ 2 K_total   (metabolic constraint)
Assuming the neural noise is independent across neurons, the system of two neurons has total Fisher information equal to the sum of the Fisher information contributed by each neuron: I_F(s) = I_1(s) + I_2(s).
3.1 Optimal response functions
Previous studies on neural coding with monotonic response functions have typically assumed that each h_i(s) has a sigmoidal shape. It is important to emphasize that we do not make any a priori assumptions on the detailed shape of the tuning curve other than being monotonic and smooth. We define each neuron's active region A_i = A_i^+ ∪ A_i^−, where A_i^+ = {s | h_i'(s) > 0} and A_i^− = {s | −h_i'(s) > 0}. Due to the monotonicity of the tuning curves, either A_i^+ or A_i^− has to be empty.
We find the following results (proof in the supplementary materials):

1. Different neurons should have non-overlapping active regions.

2. If the metabolic constraint is binding, ON-OFF coding is better than ON-ON coding or OFF-OFF coding. Otherwise all three coding schemes can achieve the same mutual information.

3. For ON-OFF coding, it is better to have ON regions on the right side.

4. For ON-ON (or OFF-OFF) coding, each neuron should have roughly the same tuning curve, h_i(s) ≈ h_j(s), while still having disjoint active regions. Note that a conceptually similar coding scheme has been previously discussed by [29]. Within the ON-pool or OFF-pool, the optimal tuning curve is the same as the optimal solution from the single neuron case.
In Fig. 3a-d, we illustrate how these conclusions can be used to determine the optimal pair of neurons, assuming additive Gaussian noise (α = 0) and linear metabolic cost (β = 1); for other α and β the process is similar. Our analytical results allow us to predict the precise shape of the optimal response functions, which goes beyond previous work on ON-OFF coding schemes [13, 31].
3.2 Comparison between ON-OFF and ON-ON codes
We aim to compare the coding performance of ON-OFF and ON-ON codes. In Fig. 3e we show how the mutual information depends on the available metabolic budget. For both the ON-OFF and ON-ON schemes, the mutual information increases monotonically as a function of the available energy. We compare these two curves in two different ways. First, we notice that the two mutual information curves saturate at K_ON-ON = 0.5 r_max and K_ON-OFF = 0.25 r_max, respectively (see the red tuning curves in Fig. 3a-d). Note that these specific saturation limits are only valid for α = 0 and β = 1. For any given level of mutual information, we find that the optimal ON-ON pair (or OFF-OFF pair) always costs twice the energy of the optimal ON-OFF pair. Second, one can compare the ON-ON and
neurons is always smaller than that achieved by ON-OFF neurons and the difference is plotted in
Fig.3. When the available energy is extremely limited Ktotal rmax , such difference saturates at ?1
in the logarithm space of MI (base 2). This shows that, in the worst scenario, the ON-ON code is only
half as efficient as the ON-OFF code from mutual information perspective. In other words, it would
take twice the amount of time T for the ON-ON code to convey same amount of mutual information
as the ON-OFF code under same noise level.
These analyses quantitatively characterize the advantage of ON-OFF over ON-ON and show how it
varies when the relative importance of the metabolic constraint changes. The encoding efficiency of
ON-OFF ranges from twice (with a very limited metabolic budget) down to equal to the ON-ON
efficiency (with unlimited metabolic budget). This wide range includes the previous conclusion reported by Gjorgjieva et al., where a mild advantage (≈ 15%) of the ON-OFF scheme was found in the short integration time limit [31].
Figure 3: The optimal response functions for a pair of neurons assuming Gaussian noise. (a) The optimal response functions for a uniform input distribution assuming the ON-OFF coding scheme. Solid red and dashed red curves represent the optimal response functions for a pair of neurons with no metabolic constraint ("max cost"). Solid blue and dashed blue curves are the optimal response functions with a substantial metabolic constraint ("low cost"). (b) Similar to panel a, but for input stimuli with a heavy tail distribution. (c) The optimal response functions for a uniform input distribution assuming the ON-ON coding scheme. Solid and dashed red curves are for no metabolic constraint. Notice that the two curves appear to be identical but are actually different at finer scales (see the inset panel). Solid and dashed blue are for a substantial metabolic constraint. (d) Similar to panel c, but for input stimuli with a heavy tail distribution. (e) A comparison between the ON-ON and ON-OFF schemes. The x-axis represents how substantial the metabolic constraint is; any value greater than the threshold 0.5 implies no metabolic constraint in effect. The y-axis represents the mutual information, relative to the maximal achievable mutual information without metabolic constraints (which is the same for ON-ON and ON-OFF schemes). The green dashed line represents the difference between the information transmitted by the two schemes. A negative difference indicates an advantage of ON-OFF over ON-ON.
It is well known that the split into ON and OFF pathways exists in the retina of many species [32, 33]. The substantial increase of efficiency under strong metabolic constraint we discovered supports the argument that the metabolic constraint may be one of the main reasons for such pathway splitting in evolution.
In a recent study by Karklin and Simoncelli [13], it was observed numerically that an ON-OFF coding scheme can naturally emerge when a linear-nonlinear population of neurons is trained to maximize mutual information with image input under a metabolic constraint. It is tempting to speculate about a
generic connection of these numerical observations to our theoretical results, although our model is
much more simplified in the sense that we do not directly model the higher dimensional stimulus
(natural image) but just a one dimensional projection (local contrast). Intriguingly, we find that if the
inputs follow certain heavy tail distribution ( Fig.3b), the optimal response functions are two rectified
non-linear functions which split the encoding range. Such rectified non-linearity is consistent with
both the non-linearity observed physiologically [34] and the numerical results in [13].
4 Discussion
In this paper we presented a theoretical framework for studying optimal neural codes under biologically relevant constraints. Compared to previous work, we emphasize the importance of two types of constraints: the noise characteristics of the neural responses and the metabolic cost. Throughout the paper, we have focused on neural codes with smooth monotonic response functions. We demonstrated that, perhaps surprisingly, analytical solutions exist for a wide family of noise characteristics and metabolic cost functions. These analytical results rely on the technique of approximating mutual information using Fisher information. There are cases where such an approximation would break down, in particular for short integration time or non-Gaussian noise. For a more detailed discussion on the validity of the Fisher approximation, see [29, 14, 35].
We have focused on the cases of a single neuron and a pair of neurons. However, the framework
can be generalized to the case of larger populations of neurons. For the case of N = 2k (k large) neurons, we again find that the corresponding optimization problem can be solved analytically by
exploiting the Fisher information approximation of mutual information [28, 14]. Interestingly, we
found the optimal codes should be divided into two pools of neurons of equal size k. One pool
of neuron with monotonic increasing response function (ON-pool), and the other with monotonic
decreasing response function (OFF-pool). For neurons within the same pool, the optimal response
functions appear to be identical on the macro-scale but are quite different when zoomed in. In fact, the optimal code must have disjoint active regions for each neuron. This is similar to what has been illustrated in the inset panel of Fig. 3c, where two seemingly identical tuning curves for ON-neurons are compared. We can also quantify the increase of the mutual information obtained by using optimal coding schemes versus using all ON neurons (or all OFF). Interestingly, some of the key results presented in Fig. 3e for a pair of neurons generalize to the 2k case. When N = 2k + 1, the optimal solution is
similar to N = 2k for a large pool of neurons. However, when k is small, the difference caused by
asymmetry between ON/OFF pools can substantially change the configuration of the optimal code.
Due to the limited scope of the paper, we have ignored several important aspects when formulating
the efficient coding problem. First, we have not modeled the spontaneous activity (baseline firing rate)
of neurons. Second, we have not considered the noise correlations between the responses of neurons.
Third, we have ignored the noise in the input to the neurons. We think that the first two factors
are unlikely to change our main results. However, incorporating the input noise may significantly
change the results. In particular, for the cases of multiple neurons, our current results predict that
there is no overlap between the active regions of the response functions for ON and OFF neurons.
However, it is possible that this prediction does not hold in the presence of the input noise. In that
case, it might be beneficial to have some redundancy by making the response functions partially
overlap. Including these factors into the framework should facilitate a detailed and quantitative
comparison to physiologically measured data in the future. As for the objective function, we have
only considered the case of maximizing mutual information; it is interesting to see whether the results
can be generalized to other objective functions such as, e.g., minimizing decoding error [36, 37]. Also, our theory is based on a one-dimensional input. To fully explain the ON-OFF split in the visual pathway,
it seems necessary to consider a more complete model with the images as the input. To this end, our
current model lacks the spatial component, and it doesn?t explain the difference between the number
of ON and OFF neurons in retina [38]. Nonetheless, the insight from these analytical results based
on the simple model may prove to be useful for a more complete understanding of the functional
organization of the early visual pathway. Last but not least, we have assumed a stationary input
distribution. However, in the natural environment the input distribution often fluctuates at different time scales; it remains to be investigated how to incorporate these dynamical aspects into a theory of
efficient coding.
References
[1] Fred Attneave. Some informational aspects of visual perception. Psychological Review, 61(3):183, 1954.
[2] Horace B Barlow. Possible principles underlying the transformation of sensory messages. Sensory Communication, pages 217-234, 1961.
[3] Ralph Linsker. Self-organization in a perceptual network. Computer, 21(3):105-117, 1988.
[4] Joseph J Atick and A Norman Redlich. Towards a theory of early visual processing. Neural Computation, 2(3):308-320, 1990.
[5] Joseph J Atick. Could information theory provide an ecological theory of sensory processing? Network: Computation in Neural Systems, 3(2):213-251, 1992.
[6] F Rieke, DA Bodnar, and W Bialek. Naturalistic stimuli increase the rate and efficiency of information transmission by primary auditory afferents. Proceedings of the Royal Society of London. Series B: Biological Sciences, 262(1365):259-265, 1995.
[7] Bruno Olshausen and David Field. Emergence of simple-cell receptive field properties by learning a sparse code for natural images. Nature, 381:607-609, 1996.
[8] Anthony J Bell and Terrence J Sejnowski. The "independent components" of natural scenes are edge filters. Vision Research, 37(23):3327-3338, 1997.
[9] Eero P Simoncelli and Bruno A Olshausen. Natural image statistics and neural representation. Annual Review of Neuroscience, 24(1):1193-1216, 2001.
[10] Allan Gottschalk. Derivation of the visual contrast response function by maximizing information rate. Neural Computation, 14(3):527-542, 2002.
[11] Nicol S Harper and David McAlpine. Optimal neural population coding of an auditory spatial cue. Nature, 430(7000):682-686, 2004.
[12] Mark D McDonnell and Nigel G Stocks. Maximally informative stimuli and tuning curves for sigmoidal rate-coding neurons and populations. Physical Review Letters, 101(5):058103, 2008.
[13] Yan Karklin and Eero P Simoncelli. Efficient coding of natural images with a population of noisy linear-nonlinear neurons. Advances in Neural Information Processing Systems, 24:999, 2011.
[14] Xue-Xin Wei and Alan A Stocker. Mutual information, Fisher information, and efficient coding. Neural Computation, 2016.
[15] Horace Barlow. Redundancy reduction revisited. Network: Computation in Neural Systems, 12(3):241-253, 2001.
[16] William B Levy and Robert A Baxter. Energy efficient neural codes. Neural Computation, 8(3):531-543, 1996.
[17] Simon B Laughlin, Rob R de Ruyter van Steveninck, and John C Anderson. The metabolic cost of neural information. Nature Neuroscience, 1(1):36-41, 1998.
[18] Vijay Balasubramanian, Don Kimber, and Michael J Berry II. Metabolically efficient information processing. Neural Computation, 13(4):799-815, 2001.
[19] Simon B Laughlin. A simple coding procedure enhances a neuron's information capacity. Z. Naturforsch, 36(910-912):51, 1981.
[20] Deep Ganguli and Eero P Simoncelli. Efficient sensory encoding and Bayesian inference with heterogeneous neural populations. Neural Computation, 26(10):2103-2134, 2014.
[21] David B Kastner, Stephen A Baccus, and Tatyana O Sharpee. Critical and maximally informative encoding between neural populations in the retina. Proceedings of the National Academy of Sciences, 112(8):2533-2538, 2015.
[22] George J Tomko and Donald R Crapper. Neuronal variability: non-stationary responses to identical visual stimuli. Brain Research, 79(3):405-418, 1974.
[23] DJ Tolhurst, JA Movshon, and ID Thompson. The dependence of response amplitude and variance of cat visual cortical neurones on stimulus contrast. Experimental Brain Research, 41(3-4):414-419, 1981.
[24] Mark M Churchland et al. Stimulus onset quenches neural variability: a widespread cortical phenomenon. Nature Neuroscience, 13(3):369-378, 2010.
[25] Moshe Gur and D Max Snodderly. High response reliability of neurons in primary visual cortex (V1) of alert, trained monkeys. Cerebral Cortex, 16(6):888-895, 2006.
[26] David Attwell and Simon B Laughlin. An energy budget for signaling in the grey matter of the brain. Journal of Cerebral Blood Flow & Metabolism, 21(10):1133-1145, 2001.
[27] Xue-Xin Wei and Alan A Stocker. A Bayesian observer model constrained by efficient coding can explain "anti-Bayesian" percepts. Nature Neuroscience, 2015.
[28] Nicolas Brunel and Jean-Pierre Nadal. Mutual information, Fisher information, and population coding. Neural Computation, 10(7):1731-1757, 1998.
[29] Matthias Bethge, David Rotermund, and Klaus Pawelzik. Optimal short-term population coding: when Fisher information fails. Neural Computation, 14(10):2317-2351, 2002.
[30] Don H Johnson and Will Ray. Optimal stimulus coding by neural populations using rate codes. Journal of Computational Neuroscience, 16(2):129-138, 2004.
[31] Julijana Gjorgjieva, Haim Sompolinsky, and Markus Meister. Benefits of pathway splitting in sensory coding. The Journal of Neuroscience, 34(36):12127-12144, 2014.
[32] Peter H Schiller. The ON and OFF channels of the visual system. Trends in Neurosciences, 15(3):86-92, 1992.
[33] Heinz Wässle. Parallel processing in the mammalian retina. Nature Reviews Neuroscience, 5(10):747-757, 2004.
[34] Matteo Carandini. Amplification of trial-to-trial response variability by neurons in visual cortex. PLoS Biol, 2(9):e264, 2004.
[35] Zhuo Wang, Alan A Stocker, and Daniel D Lee. Efficient neural codes that minimize Lp reconstruction error. Neural Computation, 2016.
[36] Tvd Twer and Donald IA MacLeod. Optimal nonlinear codes for the perception of natural colours. Network: Computation in Neural Systems, 12(3):395-407, 2001.
[37] Zhuo Wang, Alan A Stocker, and Daniel D Lee. Optimal neural tuning curves for arbitrary stimulus distributions: Discrimax, infomax and minimum Lp loss. In Advances in Neural Information Processing Systems (NIPS), pages 2177-2185, 2012.
[38] Charles P Ratliff, Bart G Borghuis, Yen-Hong Kao, Peter Sterling, and Vijay Balasubramanian. Retina is structured to process an excess of darkness in natural scenes. Proceedings of the National Academy of Sciences, 107(40):17368-17373, 2010.
Stochastic Variance Reduction Methods
for Saddle-Point Problems
P. Balamurugan
INRIA - École Normale Supérieure, Paris
balamurugan.palaniappan@inria.fr
Francis Bach
INRIA - École Normale Supérieure, Paris
francis.bach@ens.fr
Abstract
We consider convex-concave saddle-point problems where the objective functions
may be split in many components, and extend recent stochastic variance reduction
methods (such as SVRG or SAGA) to provide the first large-scale linearly convergent algorithms for this class of problems which are common in machine learning.
While the algorithmic extension is straightforward, it comes with challenges and
opportunities: (a) the convex minimization analysis does not apply and we use
the notion of monotone operators to prove convergence, showing in particular
that the same algorithm applies to a larger class of problems, such as variational
inequalities, (b) there are two notions of splits, in terms of functions, or in terms of
partial derivatives, (c) the split does not need to be done with convex-concave terms,
(d) non-uniform sampling is key to an efficient algorithm, both in theory and practice, and (e) these incremental algorithms can be easily accelerated using a simple
extension of the "catalyst" framework, leading to an algorithm which is always
superior to accelerated batch algorithms.
1 Introduction
When using optimization in machine learning, leveraging the natural separability of the objective
functions has led to many algorithmic advances; the most common example is the separability as a sum
of individual loss terms corresponding to individual observations, which leads to stochastic gradient
descent techniques. Several lines of work have shown that the plain Robbins-Monro algorithm could
be accelerated for strongly-convex finite sums, e.g., SAG [1], SVRG [2], SAGA [3]. However, these
only apply to separable objective functions.
In order to tackle non-separable losses or regularizers, we consider the saddle-point problem:
    min_{x ∈ R^d} max_{y ∈ R^n} K(x, y) + M(x, y),    (1)
where the functions K and M are "convex-concave", that is, convex with respect to the first variable, and concave with respect to the second variable, with M potentially non-smooth but "simple" (e.g., for which the proximal operator is easy to compute), and K smooth. These problems occur naturally within convex optimization through Lagrange or Fenchel duality [4]; for example the bilinear saddle-point problem min_{x ∈ R^d} max_{y ∈ R^n} f(x) + y^T Kx − g(y) corresponds to a supervised learning problem with design matrix K, a loss function g^* (the Fenchel conjugate of g) and a regularizer f.
We assume that the function K may be split into a potentially large number of components. Many
problems in machine learning exhibit that structure in the saddle-point formulation, but not in the
associated convex minimization and concave maximization problems (see examples in Section 2.1).
Like for convex minimization, gradient-based techniques that are blind to this separable structure
need to access all the components at every iteration. We show that algorithms such as SVRG [2] and
SAGA [3] may be readily extended to the saddle-point problem. While the algorithmic extension is
straightforward, it comes with challenges and opportunities. We make the following contributions:
30th Conference on Neural Information Processing Systems (NIPS 2016), Barcelona, Spain.
- We provide the first convergence analysis for these algorithms for saddle-point problems, which differs significantly from the associated convex minimization set-up. In particular, we use in Section 6 the interpretation of saddle-point problems as finding the zeros of a monotone operator, and only use the monotonicity properties to show linear convergence of our algorithms, thus showing that they extend beyond saddle-point problems, e.g., to variational inequalities [5, 6].
- We show that the saddle-point formulation (a) allows two different notions of splits, in terms of functions, or in terms of partial derivatives, (b) does not need splits into convex-concave terms (as opposed to convex minimization), and (c) that non-uniform sampling is key to an efficient algorithm, both in theory and practice (see experiments in Section 7).
- We show in Section 5 that these incremental algorithms can be easily accelerated using a simple extension of the "catalyst" framework of [7], thus leading to an algorithm which is always superior to accelerated batch algorithms.
2 Composite Decomposable Saddle-Point Problems
We now present our new algorithms on saddle-point problems and show a natural extension to
monotone operators later in Section 6. We thus consider the saddle-point problem defined in Eq. (1),
with the following assumptions:
(A) M is strongly (λ, γ)-convex-concave, that is, the function (x, y) ↦ M(x, y) − (λ/2)‖x‖² + (γ/2)‖y‖² is convex-concave. Moreover, we assume that we may compute the proximal operator of M, i.e., for any (x', y') ∈ R^{n+d} (σ is the step-length parameter associated with the prox operator):

    prox_{σM}(x', y') = argmin_{x ∈ R^d} max_{y ∈ R^n} σM(x, y) + (λ/2)‖x − x'‖² − (γ/2)‖y − y'‖².    (2)

The values of λ and γ lead to the definition of a weighted Euclidean norm on R^{n+d} defined as Ω(x, y)² = λ‖x‖² + γ‖y‖², with dual norm defined through Ω*(x, y)² = λ^{−1}‖x‖² + γ^{−1}‖y‖². Dealing with the two different scaling factors λ and γ is crucial for good performance, as these may be very different, depending on the many arbitrary ways to set up a saddle-point problem.
(B) K is convex-concave and has Lipschitz-continuous gradients; it is natural to consider the gradient operator B : R^{n+d} → R^{n+d} defined as B(x, y) = (∂_x K(x, y), −∂_y K(x, y)) ∈ R^{n+d} and to consider L = sup_{Ω(x−x', y−y')=1} Ω*(B(x, y) − B(x', y')). The quantity L represents the condition number of the problem.
(C) The vector-valued function B(x, y) = (∂_x K(x, y), −∂_y K(x, y)) ∈ R^{n+d} may be split into a family of vector-valued functions as B = Σ_{i ∈ I} B_i, where the only constraint is that each B_i is Lipschitz-continuous (with constant L_i). There is no need to assume the existence of a function K_i : R^{n+d} → R such that B_i = (∂_x K_i, −∂_y K_i).
We will also consider splits which are adapted to the saddle-point nature of the problem, that is, of the form B(x, y) = (Σ_{k ∈ K} B_k^x(x, y), Σ_{j ∈ J} B_j^y(x, y)), which is a subcase of the above with I = J × K, B_{jk}(x, y) = (p_j B_k^x(x, y), q_k B_j^y(x, y)), for p and q sequences that sum to one. This substructure, which we refer to as "factored", will only make a difference when storing the values of these operators in Section 4 for our SAGA algorithm.
Given assumptions (A)-(B), the saddle-point problem in Eq. (1) has a unique solution (x*, y*) such that K(x*, y) + M(x*, y) ≤ K(x*, y*) + M(x*, y*) ≤ K(x, y*) + M(x, y*) for all (x, y); moreover min_{x ∈ R^d} max_{y ∈ R^n} K(x, y) + M(x, y) = max_{y ∈ R^n} min_{x ∈ R^d} K(x, y) + M(x, y) (see, e.g., [8, 4]).
The main generic examples for the functions K(x, y) and M (x, y) are:
- Bilinear saddle-point problems: K(x, y) = y^T Kx for a matrix K ∈ R^{n×d} (we identify here a matrix with the associated bilinear function), for which the vector-valued function B(x, y) is linear, i.e., B(x, y) = (K^T y, −Kx). Then L = ‖K‖_op/√(λγ), where ‖K‖_op is the largest singular value of K.
There are two natural potential splits with I = {1, ..., n} × {1, ..., d}, with B = Σ_{j=1}^n Σ_{k=1}^d B_{jk}: (a) the split into individual elements B_{jk}(x, y) = K_{jk}(y_j, −x_k), where every element is the gradient operator of a bi-linear function, and (b) the "factored" split into rows/columns B_{jk}(x, y) = (q_k y_j K_{j·}^T, −p_j x_k K_{·k}), where K_{j·} and K_{·k} are the j-th row and k-th column of K, p and q are any set of vectors summing to one, and every element is not the gradient operator of any function. These splits correspond to several "sketches" of the matrix K [9], adapted to subsampling of K, but other sketches could be considered.
- Separable functions: M(x, y) = f(x) − g(y) where f is any λ-strongly-convex and g is γ-strongly convex, for which the proximal operators prox_{σf}(x') = argmin_{x ∈ R^d} σf(x) + (λ/2)‖x − x'‖² and prox_{σg}(y') = argmax_{y ∈ R^n} −σg(y) − (γ/2)‖y − y'‖² are easy to compute. In this situation, prox_{σM}(x', y') = (prox_{σf}(x'), prox_{σg}(y')). Following the usual set-up of composite optimization [10], no smoothness assumption is made on M and hence on f or g.
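As a concrete illustration of this prox computation, here is a minimal Python sketch (our own, not from the paper) for the simplest quadratic choice f(x) = (lam/2)‖x‖² and g(y) = (gam/2)‖y‖², where both proximal operators reduce to a scaling:

def prox_f_quadratic(x0, sigma):
    # argmin_x sigma*(lam/2)*||x||^2 + (lam/2)*||x - x0||^2  =  x0 / (1 + sigma)
    return x0 / (1.0 + sigma)

def prox_g_quadratic(y0, sigma):
    # argmax_y -sigma*(gam/2)*||y||^2 - (gam/2)*||y - y0||^2  =  y0 / (1 + sigma)
    return y0 / (1.0 + sigma)

def prox_M(x0, y0, sigma):
    # for separable M(x, y) = f(x) - g(y), the prox decouples into the two blocks
    return prox_f_quadratic(x0, sigma), prox_g_quadratic(y0, sigma)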
2.1 Examples in machine learning
Many learning problems are formulated as convex optimization problems, and hence by duality as
saddle-point problems. We now give examples where our new algorithms are particularly adapted.
Supervised learning with non-separable losses or regularizers. For regularized linear supervised learning, with n d-dimensional observations put in a design matrix K ∈ R^{n×d}, the predictions are parameterized by a vector x ∈ R^d and lead to a vector of predictions Kx ∈ R^n. Given a loss function defined through its Fenchel conjugate g^* from R^n to R, and a regularizer f(x), we obtain exactly a bi-linear saddle-point problem. When the loss g^* or the regularizer f is separable, i.e., a sum of functions of individual variables, we may apply existing fast gradient techniques [1, 2, 3] to the primal problem min_{x ∈ R^d} g^*(Kx) + f(x) or the dual problem max_{y ∈ R^n} −g(y) − f^*(K^T y), as well as methods dedicated to separable saddle-point problems [11, 12]. When the loss g^* and the regularizer f are not separable (but have a simple proximal operator), our new fast algorithms are the only ones that can be applied from the class of large-scale linearly convergent algorithms.
Non-separable losses may occur when (a) predicting by affine functions of the inputs and not
penalizing the constant terms (in this case defining the loss functions as the minimum over the
constant term, which becomes non-separable) or (b) using structured output prediction methods
that lead to convex surrogates to the area under the ROC curve (AUC) or other precision/recall
quantities [13, 14]. These often come with efficient proximal operators (see Section 7 for an example).
Non-separable regularizers with available efficient proximal operators are numerous, such as grouped norms with potentially overlapping groups, norms based on submodular functions, or total variation (see [15] and references therein, and an example in Section 7).
Robust optimization. The framework of robust optimization [16] aims at optimizing an objective
function with uncertain data. Given that the aim is then to minimize the maximal value of the
objective function given the uncertainty, this leads naturally to saddle-point problems.
Convex relaxation of unsupervised learning. Unsupervised learning leads to convex relaxations
which often exhibit structures naturally amenable to saddle-point problems, e.g, for discriminative
clustering [17] or matrix factorization [18].
2.2 Existing batch algorithms
In this section, we review existing algorithms aimed at solving the composite saddle-point problem in
Eq. (1), without using the sum-structure. Note that it is often possible to apply batch algorithms for
the associated primal or dual problems (which are not separable in general).
Forward-backward (FB) algorithm. The main iteration is

    (x_t, y_t) = prox_{σM}[(x_{t−1}, y_{t−1}) − σ diag(1/λ, 1/γ) B(x_{t−1}, y_{t−1})]
               = prox_{σM}(x_{t−1} − σλ^{−1} ∂_x K(x_{t−1}, y_{t−1}), y_{t−1} + σγ^{−1} ∂_y K(x_{t−1}, y_{t−1})).

The algorithm aims at simultaneously minimizing with respect to x while maximizing with respect to y (when M(x, y) is the sum of isotropic quadratic terms and indicator functions, we get simultaneous projected gradient descents). This algorithm is known not to converge in general [8], but is linearly convergent for strongly-convex-concave problems, when σ = 1/L², with the rate (1 − 1/(1 + L²))^t [19] (see simple proof in Appendix B.1). This is the one we are going to adapt to stochastic variance reduction.
When M(x, y) = f(x) − g(y), we obtain the two parallel updates x_t = prox_{σf}(x_{t−1} − λ^{−1}σ ∂_x K(x_{t−1}, y_{t−1})) and y_t = prox_{σg}(y_{t−1} + γ^{−1}σ ∂_y K(x_{t−1}, y_{t−1})), which can be done serially by replacing the second one by y_t = prox_{σg}(y_{t−1} + γ^{−1}σ ∂_y K(x_t, y_{t−1})). This is often referred to as the Arrow-Hurwicz method (see [20] and references therein).
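As an illustration, the following is a minimal Python sketch of this batch forward-backward iteration for the bilinear case K(x, y) = y^T Kx with the quadratic choice M(x, y) = (lam/2)‖x‖² − (gam/2)‖y‖² (so that prox_{σM} is a simple scaling); this toy instantiation is ours, not code from the paper:

import numpy as np

def forward_backward(K, lam, gam, n_iters=1000):
    n, d = K.shape
    x, y = np.ones(d), np.ones(n)                  # any initialization
    L = np.linalg.norm(K, 2) / np.sqrt(lam * gam)  # condition number for bilinear K
    sigma = 1.0 / L ** 2                           # step-size suggested in the text
    for _ in range(n_iters):
        # forward step: B(x, y) = (K^T y, -K x), preconditioned by diag(1/lam, 1/gam)
        x_f = x - (sigma / lam) * (K.T @ y)
        y_f = y + (sigma / gam) * (K @ x)
        # backward (prox) step for the quadratic M above: a scaling by 1/(1 + sigma)
        x, y = x_f / (1.0 + sigma), y_f / (1.0 + sigma)
    return x, y                                    # converges to the saddle point (0, 0) here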
Accelerated forward-backward algorithm. The forward-backward algorithm may be accelerated by a simple extrapolation step, similar to Nesterov's acceleration for convex minimization [21]. The algorithm from [20], which only applies to bilinear functions K, and which we extend from separable M to our more general set-up in Appendix B.2, has the following iteration:

    (x_t, y_t) = prox_{σM}[(x_{t−1}, y_{t−1}) − σ diag(1/λ, 1/γ) B(x_{t−1} + θ(x_{t−1} − x_{t−2}), y_{t−1} + θ(y_{t−1} − y_{t−2}))].

With σ = 1/(2L) and θ = L/(L + 1), we get an improved convergence rate, where (1 − 1/(1 + L²))^t is replaced by (1 − 1/(1 + 2L))^t. This is always a strong improvement when L is large (ill-conditioned problems), as illustrated in Section 7. Note that our acceleration technique in Section 5 may be extended to get a similar rate for the batch set-up for non-linear K.
2.3 Existing stochastic algorithms
Forward-backward algorithms have been studied with added noise [22], leading to a convergence rate in O(1/t) after t iterations for strongly-convex-concave problems. In our setting, we replace B(x, y) in our algorithm with (1/π_i) B_i(x, y), where i ∈ I is sampled from the probability vector (π_i)_i (good probability vectors will depend on the application, see below for bilinear problems). We have E[(1/π_i) B_i(x, y)] = B(x, y); the main iteration is then

    (x_t, y_t) = prox_{σ_t M}[(x_{t−1}, y_{t−1}) − σ_t diag(1/λ, 1/γ) (1/π_{i_t}) B_{i_t}(x_{t−1}, y_{t−1})],

with i_t selected independently at random in I with probability vector π. In Appendix C, we show that using σ_t = 2/(t + 1 + 8L̄(π)²) leads to a convergence rate in O(1/t), where L̄(π) is a smoothness constant explicited below. For saddle-point problems, it leads to the complexities shown in Table 1. Like for convex minimization, it is fast early on but the performance levels off. Such schemes are typically used in sublinear algorithms [23].
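One such stochastic step, sketched for the bilinear case with the individual-elements split and the non-uniform probabilities π_{jk} = K_{jk}²/‖K‖_F² discussed below (same toy quadratic M as before; this rendering is ours, not the paper's code):

import numpy as np

def stochastic_fb_step(x, y, K, probs, lam, gam, sigma, rng):
    n, d = K.shape
    idx = rng.choice(n * d, p=probs.ravel())         # sample one element (j, k) ~ pi
    j, k = divmod(idx, d)
    w = 1.0 / probs[j, k]                            # importance weight 1 / pi_jk
    gx = np.zeros(d); gx[k] = w * K[j, k] * y[j]     # x-part of (1/pi) B_jk
    gy = np.zeros(n); gy[j] = -w * K[j, k] * x[k]    # y-part of (1/pi) B_jk
    x_f = x - (sigma / lam) * gx
    y_f = y - (sigma / gam) * gy
    return x_f / (1.0 + sigma), y_f / (1.0 + sigma)  # prox for the quadratic M

# usage: probs = K**2 / (K**2).sum(); rng = np.random.default_rng(0);
# the text uses a decreasing step sigma_t = 2/(t + 1 + 8*Lbar(pi)^2).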
2.4 Sampling probabilities, convergence rates and running-time complexities
In order to characterize running-times, we denote by T(A) the complexity of computing A(x, y) for any operator A and (x, y) ∈ R^{n+d}, while we denote by T_prox(M) the complexity of computing prox_{σM}(x, y). In our motivating example of bilinear functions K(x, y), we assume that T_prox(M) takes time proportional to n + d and getting a single element of K is O(1).
In order to characterize the convergence rate, we need the Lipschitz constant L (which happens to be the condition number with our normalization) defined earlier, as well as a smoothness constant adapted to our sampling schemes:

    L̄(π)² = sup_{(x, y, x', y')} Σ_{i ∈ I} (1/π_i) Ω*(B_i(x, y) − B_i(x', y'))²  such that  Ω(x − x', y − y') ≤ 1.

We always have the bounds L² ≤ L̄(π)² ≤ max_{i ∈ I} L_i² × Σ_{i ∈ I} 1/π_i. However, in structured situations (like in bilinear saddle-point problems), we get much improved bounds, as described below.
Bi-linear saddle-point. The constant L is equal to ‖K‖_op/√(λγ), and we will consider as well the Frobenius norm ‖K‖_F defined through ‖K‖_F² = Σ_{j,k} K_{jk}², and the norm ‖K‖_max defined as ‖K‖_max = max{sup_j (KK^T)_{jj}^{1/2}, sup_k (K^T K)_{kk}^{1/2}}. Among the norms above, we always have:

    ‖K‖_max ≤ ‖K‖_op ≤ ‖K‖_F ≤ √(max{n, d}) ‖K‖_max ≤ √(max{n, d}) ‖K‖_op,    (3)

which allows to show below that some algorithms have better bounds than others.
There are several schemes to choose the probabilities π_{jk} (individual splits) and π_{jk} = p_j q_k (factored splits). For the factored formulation where we select random rows and columns, we consider the non-uniform schemes p_j = (KK^T)_{jj}/‖K‖_F² and q_k = (K^T K)_{kk}/‖K‖_F², leading to L̄(π) ≤ ‖K‖_F/√(λγ), or uniform, leading to L̄(π) ≤ √(max{n, d}) ‖K‖_max/√(λγ). For the individual formulation where we select random elements, we consider π_{jk} = K_{jk}²/‖K‖_F², leading to L̄(π) ≤ √(max{n, d}) ‖K‖_F/√(λγ), or uniform, leading to L̄(π) ≤ √(nd) ‖K‖_max/√(λγ) (in these situations, it is important to select several elements simultaneously, which our analysis supports).
We characterize convergence with the quantity ε = Ω(x − x*, y − y*)²/Ω(x₀ − x*, y₀ − y*)², where (x₀, y₀) is the initialization of our algorithms (typically (0, 0) for bilinear saddle-points). In Table 1 we give a summary of the complexity of all algorithms discussed in this paper: we recover the same type of speed-ups as for convex minimization. A few points are worth mentioning:
Algorithms                        Complexity
Batch FB                          log(1/ε) × (nd + nd ‖K‖_op²/(λγ))
Batch FB-accelerated              log(1/ε) × (nd + nd ‖K‖_op/√(λγ))
Stochastic FB-non-uniform         (1/ε) × max{n, d} ‖K‖_F²/(λγ)
Stochastic FB-uniform             (1/ε) × nd ‖K‖_max²/(λγ)
SAGA/SVRG-uniform                 log(1/ε) × (nd + nd ‖K‖_max²/(λγ))
SAGA/SVRG-non-uniform             log(1/ε) × (nd + max{n, d} ‖K‖_F²/(λγ))
SVRG-non-uniform-accelerated      log(1/ε) × (nd + √(nd max{n, d}) ‖K‖_F/√(λγ))

Table 1: Summary of convergence results for the strongly (λ, γ)-convex-concave bilinear saddle-point problem with matrix K and individual splits (and n + d updates per iteration). For factored splits (little difference), see Appendix D.4. For accelerated SVRG, we omitted the logarithmic term (see Section 5).
- Given the bounds between the various norms on K in Eq. (3), SAGA/SVRG with non-uniform sampling always has convergence bounds superior to SAGA/SVRG with uniform sampling, which is always superior to batch forward-backward. Note, however, that in practice, SAGA/SVRG with uniform sampling may be inferior to the accelerated batch method (see Section 7).
- Accelerated SVRG with non-uniform sampling is the most efficient method, which is confirmed in our experiments. Note that if n = d, our bound is better than or equal to accelerated forward-backward, in exactly the same way as for regular convex minimization. There is thus a formal advantage for variance reduction.
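The non-uniform probabilities of Section 2.4 are cheap to precompute from K; a minimal sketch (our own helper, not from the paper):

def sampling_probabilities(K):
    fro2 = (K ** 2).sum()                   # ||K||_F^2
    p_rows = (K ** 2).sum(axis=1) / fro2    # p_j = (K K^T)_jj / ||K||_F^2
    q_cols = (K ** 2).sum(axis=0) / fro2    # q_k = (K^T K)_kk / ||K||_F^2
    pi_elems = (K ** 2) / fro2              # pi_jk = K_jk^2 / ||K||_F^2 (individual split)
    return p_rows, q_cols, pi_elems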
3 SVRG: Stochastic Variance Reduction for Saddle Points
Following [2, 24], we consider a stochastic-variance reduced estimation of the finite sum B(x, y) = Σ_{i ∈ I} B_i(x, y). This is achieved by assuming that we have an iterate (x̃, ỹ) with a known value of B(x̃, ỹ), and we consider the estimate of B(x, y):

    B(x̃, ỹ) + (1/π_i) B_i(x, y) − (1/π_i) B_i(x̃, ỹ),

which has the correct expectation when i is sampled from I with probability π, but with a reduced variance. Since we need to refresh (x̃, ỹ) regularly, the algorithm works in epochs (we allow to sample m elements per update, i.e., a mini-batch of size m), with an algorithm that shares the same structure as SVRG for convex minimization; see Algorithm 1. Note that we provide an explicit number of iterations per epoch, proportional to (L² + 3L̄²/m). We have the following theorem, shown in Appendix D.1 (see also a discussion of the proof in Section 6).
Theorem 1 Assume (A)-(B)-(C). After v epochs of Algorithm 1, we have:

    E[Ω(x_v − x*, y_v − y*)²] ≤ (3/4)^v Ω(x₀ − x*, y₀ − y*)².

The computational complexity to reach precision ε is proportional to [T(B) + (mL² + L̄²) max_{i ∈ I} T(B_i) + (1 + L² + L̄²/m) T_prox(M)] log(1/ε). Note that by taking the mini-batch size m large, we can alleviate the complexity of the proximal operator prox_M if too large. Moreover, if L² is too expensive to compute, we may replace it by L̄², but with a worse complexity bound.
Bilinear saddle-point problems. When using a mini-batch size m = 1 with the factored updates, or m = n + d for the individual updates, we get the same complexities, proportional to [nd + max{n, d} ‖K‖_F²/(λγ)] log(1/ε) for non-uniform sampling, which improve significantly over (non-accelerated) batch methods (see Table 1).
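A minimal end-to-end sketch of this epoch structure in the bilinear toy setting used earlier (individual-elements split, quadratic M; the fixed step-size below is a placeholder for the paper's σ = (L² + 3L̄²/m)^{−1}, and all names are ours):

import numpy as np

def svrg_saddle(K, lam, gam, n_epochs=50, epoch_len=200, sigma=0.1, seed=0):
    rng = np.random.default_rng(seed)
    n, d = K.shape
    probs = (K ** 2) / (K ** 2).sum()       # non-uniform pi_jk
    x, y = np.ones(d), np.ones(n)           # start away from the saddle point (0, 0)
    for _ in range(n_epochs):
        xt, yt = x.copy(), y.copy()         # snapshot (x~, y~)
        Bx, By = K.T @ yt, -(K @ xt)        # full B(x~, y~), computed once per epoch
        for _ in range(epoch_len):
            idx = rng.choice(n * d, p=probs.ravel())
            j, k = divmod(idx, d)
            w = 1.0 / probs[j, k]
            gx, gy = Bx.copy(), By.copy()   # variance-reduced estimate of B(x, y)
            gx[k] += w * K[j, k] * (y[j] - yt[j])
            gy[j] += -w * K[j, k] * (x[k] - xt[k])
            x = (x - (sigma / lam) * gx) / (1.0 + sigma)  # prox for the quadratic M
            y = (y - (sigma / gam) * gy) / (1.0 + sigma)
    return x, y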
4 SAGA: Online Stochastic Variance Reduction for Saddle Points
Algorithm 1 SVRG: Stochastic Variance Reduction for Saddle Points
Input: functions (K_i)_i, M, probabilities (π_i)_i, smoothness constants L̄(π) and L, iterate (x, y),
       number of epochs v, number of updates per iteration (mini-batch size) m
Set σ = [L² + 3L̄²/m]^{−1}
for u = 1 to v do
    Initialize (x̃, ỹ) = (x, y) and compute B(x̃, ỹ)
    for k = 1 to log(4) × (L² + 3L̄²/m) do
        Sample i_1, ..., i_m ∈ I from the probability vector (π_i)_i with replacement
        (x, y) ← prox_{σM}[(x, y) − σ diag(1/λ, 1/γ) (B(x̃, ỹ) + (1/m) Σ_{k=1}^m (1/π_{i_k})(B_{i_k}(x, y) − B_{i_k}(x̃, ỹ)))]
    end for
end for
Output: approximate solution (x, y)
Following [3], we consider a stochastic-variance reduced estimation of B(x, y) = Σ_{i ∈ I} B_i(x, y). This is achieved by assuming that we store values g^i = B_i(x^{old(i)}, y^{old(i)}) for an old iterate (x^{old(i)}, y^{old(i)}), and we consider the estimate of B(x, y):

    Σ_{j ∈ I} g^j + (1/π_i) B_i(x, y) − (1/π_i) g^i,

which has the correct expectation when i is sampled from I with probability π. At every iteration, we also refresh the operator values g^i ∈ R^{n+d}, for the same index i or with a new index i sampled uniformly at random. This leads to Algorithm 2, and we have the following theorem showing linear convergence, proved in Appendix D.2. Note that for bi-linear saddle-points, the initialization at (0, 0) has zero cost (which is not possible for convex minimization).
Theorem 2 Assume (A)-(B)-(C). After t iterations of Algorithm 2 (with the option of resampling when using non-uniform sampling), we have:

    E[Ω(x_t − x*, y_t − y*)²] ≤ 2 (1 − (max{3|I|/(2m), 1 + L² + 3L̄²/m})^{−1})^t Ω(x₀ − x*, y₀ − y*)².
Resampling or re-using the same gradients. For the bound above to be valid for non-uniform
sampling, like for convex minimization [25], we need to resample m operators after we make
the iterate update. In our experiments, following [25], we considered a mixture of uniform and
non-uniform sampling, without the resampling step.
SAGA vs. SVRG. The difference between the two algorithms is the same as for convex minimization
(see, e.g., [26] and references therein), that is SVRG has no storage, but works in epochs and requires
slightly more accesses to the oracles, while SAGA is a pure online method with fewer parameters but
requires some storage (for bi-linear saddle-point problems, we only need to store O(n+d) elements
for the factored splits, while we need O(dn) for the individual splits). Overall they have the same
running-time complexity for individual splits; for factored splits, see Appendix D.4.
Factored splits. When using factored splits, we need to store the two parts of the operator values
separately and update them independently, leading in Theorem 2 to replacing |I| by max{|J|, |K|}.
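For concreteness, here is a minimal sketch of the SAGA table update of Algorithm 2 in the same bilinear toy setting with the individual split (one stored pair per matrix entry, hence O(nd) storage; the factored split would reduce this to O(n + d)). The quadratic M and all names are our own illustrative assumptions:

import numpy as np

def saga_init(K, x, y):
    # stored operator values g^{jk}: x-part K_jk * y_j (on coord k), y-part -K_jk * x_k (on coord j)
    g_x = K * y[:, None]                     # shape (n, d)
    g_y = -K * x[None, :]                    # shape (n, d)
    return g_x, g_y, g_x.sum(axis=0), g_y.sum(axis=1)   # tables plus running sums (Gx, Gy)

def saga_step(x, y, K, probs, g_x, g_y, Gx, Gy, lam, gam, sigma, rng):
    n, d = K.shape
    idx = rng.choice(n * d, p=probs.ravel())
    j, k = divmod(idx, d)
    w = 1.0 / probs[j, k]
    hx, hy = K[j, k] * y[j], -K[j, k] * x[k]    # fresh B_jk(x, y)
    est_x, est_y = Gx.copy(), Gy.copy()         # SAGA estimate: G + (1/pi)(h - g)
    est_x[k] += w * (hx - g_x[j, k])
    est_y[j] += w * (hy - g_y[j, k])
    x_new = (x - (sigma / lam) * est_x) / (1.0 + sigma)  # prox for the quadratic M
    y_new = (y - (sigma / gam) * est_y) / (1.0 + sigma)
    Gx[k] += hx - g_x[j, k]                     # refresh the running sums and the table
    Gy[j] += hy - g_y[j, k]
    g_x[j, k], g_y[j, k] = hx, hy
    return x_new, y_new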
5 Acceleration
Following the "catalyst" framework of [7], we consider a sequence of saddle-point problems with added regularization; namely, given (x̄, ȳ), we use SVRG to solve approximately

    min_{x ∈ R^d} max_{y ∈ R^n} K(x, y) + M(x, y) + (τλ/2)‖x − x̄‖² − (τγ/2)‖y − ȳ‖²,    (4)

for well-chosen τ and (x̄, ȳ). The main iteration of the algorithm differs from the original SVRG by the presence of the iterate (x̄, ȳ), which is updated regularly (after a precise number of epochs), and different step-sizes (see details in Appendix D.3). The complexity to get an approximate solution of Eq. (4) (forgetting the complexity of the proximal operator and for a single update), up to logarithmic terms, is proportional to T(B) + L̄²(1 + τ)^{−2} max_{i ∈ I} T(B_i).
The key difference with the convex optimization set-up is that the analysis is simpler, without the need for the Nesterov acceleration machinery [21] to define a good value of (x̄, ȳ); indeed, the solution of Eq. (4) is one iteration of the proximal-point algorithm, which is known to converge linearly [27] with rate (1 + τ^{−1})^{−1} = 1 − 1/(1 + τ).
Algorithm 2 SAGA: Online Stochastic Variance Reduction for Saddle Points
Input: functions (K_i)_i, M, probabilities (π_i)_i, smoothness constants L̄(π) and L, iterate (x, y),
       number of iterations t, number of updates per iteration (mini-batch size) m
Set σ = [max{3|I|/(2m) − 1, L² + 3L̄²/m}]^{−1}
Initialize g^i = B_i(x, y) for all i ∈ I and G = Σ_{i ∈ I} g^i
for u = 1 to t do
    Sample i_1, ..., i_m ∈ I from the probability vector (π_i)_i with replacement
    Compute h_k = B_{i_k}(x, y) for k ∈ {1, ..., m}
    (x, y) ← prox_{σM}[(x, y) − σ diag(1/λ, 1/γ) (G + (1/m) Σ_{k=1}^m (1/π_{i_k})(h_k − g^{i_k}))]
    (optional) Sample i_1, ..., i_m ∈ I uniformly with replacement
    (optional) Compute h_k = B_{i_k}(x, y) for k ∈ {1, ..., m}
    Replace G ← G − Σ_{k=1}^m (g^{i_k} − h_k) and g^{i_k} ← h_k for k ∈ {1, ..., m}
end for
Output: approximate solution (x, y)
Thus the overall complexity is, up to logarithmic terms, equal to T(B)(1 + τ) + L̄²(1 + τ)^{−1} max_{i ∈ I} T(B_i). The trade-off in τ is optimal for 1 + τ = L̄ √(max_{i ∈ I} T(B_i)/T(B)), showing that there is a potential acceleration when L̄ √(max_{i ∈ I} T(B_i)/T(B)) ≥ 1, leading to a complexity L̄ √(T(B) max_{i ∈ I} T(B_i)).
Since the SVRG algorithm already works in epochs, this leads to a simple modification where every log(1 + τ) epochs, we change the values of (x̄, ȳ). See Algorithm 3 in Appendix D.3. Moreover, we can adaptively update (x̄, ȳ) more aggressively to speed up the algorithm.
The following theorem gives the convergence rate of the method (see proof in Appendix D.3). With the value of τ defined above (corresponding to τ = max{0, (‖K‖_F/√(λγ)) √(max{n^{−1}, d^{−1}}) − 1} for bilinear problems), we get the complexity L̄ √(T(B) max_{i ∈ I} T(B_i)), up to the logarithmic term log(1 + τ). For bilinear problems, this provides a significant acceleration, as shown in Table 1.
Theorem 3 Assume (A)-(B)-(C). After v epochs of Algorithm 3, we have, for any positive v:

    E[Ω(x_v − x*, y_v − y*)²] ≤ (1 − 1/(τ + 1))^v Ω(x₀ − x*, y₀ − y*)².
While we provide a proof only for SVRG, the same scheme should work for SAGA. Moreover, the
same idea also applies to the batch setting (by simply considering |I| = 1, i.e., a single function),
leading to an acceleration, but now valid for all functions K (not only bilinear).
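The accelerated outer loop itself is tiny; a minimal sketch, where svrg_solve_regularized is a hypothetical inner solver standing in for the inner epochs of Algorithm 3 (which we do not reproduce here):

def accelerated_svrg(x0, y0, tau, n_outer, svrg_solve_regularized):
    # "catalyst"-style acceleration: repeatedly solve the tau-regularized
    # problem (4) around the anchor (x_bar, y_bar), then move the anchor.
    x_bar, y_bar = x0, y0
    for _ in range(n_outer):
        # inner problem: min_x max_y K + M + (tau*lam/2)||x - x_bar||^2 - (tau*gam/2)||y - y_bar||^2
        x_bar, y_bar = svrg_solve_regularized(x_bar, y_bar, tau)
    return x_bar, y_bar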
6 Extension to Monotone Operators
In this paper, we have chosen to focus on saddle-point problems because of their ubiquity in machine learning. However, it turns out that our algorithm and, more importantly, our analysis extend to all set-valued monotone operators [8, 28]. We thus consider a maximal strongly-monotone operator A on a Euclidean space E, as well as a finite family of Lipschitz-continuous (not necessarily monotone) operators B_i, i ∈ I, with B = Σ_{i ∈ I} B_i monotone. Our algorithm then finds the zeros of A + Σ_{i ∈ I} B_i = A + B, from the knowledge of the resolvent ("backward") operator (I + σA)^{−1} (for a well-chosen σ > 0) and the forward operators B_i, i ∈ I. Note the difference with [29], which requires each B_i to be monotone with a known resolvent and A to be monotone and single-valued. There are several interesting examples (on which our algorithms apply):
- Saddle-point problems: We assume for simplicity that λ = γ = μ (this can be achieved by a simple change of variable). If we denote B(x, y) = (∂_x K(x, y), −∂_y K(x, y)) and the multi-valued operator A(x, y) = (∂_x M(x, y), −∂_y M(x, y)), then the proximal operator prox_{σM} may be written as (μI + σA)^{−1}(μx, μy), and we recover exactly our framework from Section 2.
- Convex minimization: A = ∂g and B_i = ∂f_i for a strongly-convex function g and smooth functions f_i: we recover proximal-SVRG [24] and SAGA [3], to minimize min_{z ∈ E} g(z) + Σ_{i ∈ I} f_i(z). However, this is a situation where the operators B_i have an extra property called co-coercivity [6], which we are not using because it is not satisfied for saddle-point problems. The extension of SAGA and SVRG to monotone operators was proposed earlier by [30], but only co-coercive operators are considered, and thus only convex minimization is considered (with important extensions beyond plain SAGA and SVRG), while our analysis covers a much broader set of problems. In particular, the step-sizes obtained with co-coercivity lead to divergence in the general setting. Because we do not use co-coercivity, applying our results directly to convex minimization, we would get slower rates, while, as shown in Section 2.1, these problems can be easily cast as saddle-point problems if the proximal operators of the functions f_i are known, and we then get the same rates as existing fast techniques which are dedicated to this problem [1, 2, 3].
- Variational inequality problems, which are notably common in game theory (see, e.g., [5]).
7 Experiments
We consider binary classification problems with design matrix K and label vector in {−1, 1}^n, a non-separable strongly-convex regularizer with an efficient proximal operator (the sum of the squared norm λ‖x‖²/2 and the clustering-inducing term Σ_{i≠j} |x_i − x_j|, for which the proximal operator may be computed in O(n log n) by isotonic regression [31]) and a non-separable smooth loss (a surrogate to the area under the ROC curve, defined as proportional to Σ_{i+ ∈ I+} Σ_{i− ∈ I−} (1 − y_{i+} + y_{i−})², where I+/I− are the sets of positive/negative labels, for a vector of predictions y, for which an efficient proximal operator may be computed as well, see Appendix E).
Our upper-bounds depend on the ratio ‖K‖_F²/(λγ) where λ is the regularization strength and γ ∼ n in our setting where we minimize an average risk. Setting λ = λ₀ = ‖K‖_F²/n² corresponds to a regularization proportional to the average squared radius of the data divided by 1/n, which is standard in this setting [1]. We also experiment with smaller regularization (i.e., λ/λ₀ = 10^{−1}), to make the problem more ill-conditioned (it turns out that the corresponding testing losses are sometimes slightly better). We consider two datasets, sido (n = 10142, d = 4932, non-separable losses and regularizers presented above) and rcv1 (n = 20242, d = 47236, separable losses and regularizer described in Appendix F, so that we can compare with SAGA run in the primal). We report below the squared distance to optimizers which appears in our bounds, as a function of the number of passes on the data (for more details and experiments with primal-dual gaps and testing losses, see Appendix F). Unless otherwise specified, we always use non-uniform sampling.
[Figure: squared distance to optimizers vs. number of passes over the data (0 to 500), for sido with λ/λ₀ = 1.00 (left), sido with λ/λ₀ = 0.10 (middle), and rcv1 with λ/λ₀ = 1.00 (right); methods compared: fb-acc, fb-sto, saga, saga (unif), svrg, svrg-acc, fba-primal, and (rcv1 only) saga-primal.]
We see that uniform sampling for SAGA does not improve on batch methods, while SAGA and accelerated SVRG (with non-uniform sampling) improve significantly over the existing methods, with a stronger gain for the accelerated version for ill-conditioned problems (middle vs. left plot). On the right plot, we compare to primal methods on a separable loss, showing that primal methods (here "fba-primal",
which is Nesterov acceleration) that do not use separability (and can thus be applied in all cases)
are inferior, while SAGA run on the primal remains faster (but cannot be applied for non-separable
losses).
8 Conclusion
We proposed the first linearly convergent incremental gradient algorithms for saddle-point problems,
which improve both in theory and practice over existing batch or stochastic algorithms. While we
currently need to know the strong convexity-concavity constants, we plan to explore in future work
adaptivity to these constants, as already obtained for convex minimization [3], paving the way to an
analysis without strong convexity-concavity.
References
[1] N. Le Roux, M. Schmidt, and F. Bach. A stochastic gradient method with an exponential convergence rate for finite training sets. In Adv. NIPS, 2012.
[2] R. Johnson and T. Zhang. Accelerating stochastic gradient descent using predictive variance reduction. In Adv. NIPS, 2013.
[3] A. Defazio, F. Bach, and S. Lacoste-Julien. SAGA: A fast incremental gradient method with support for non-strongly convex composite objectives. In Adv. NIPS, 2014.
[4] R. T. Rockafellar. Monotone operators associated with saddle-functions and minimax problems. Nonlinear Functional Analysis, 18(part 1):397–407, 1970.
[5] P. T. Harker and J.-S. Pang. Finite-dimensional variational inequality and nonlinear complementarity problems: a survey of theory, algorithms and applications. Math. Prog., 48(1-3):161–220, 1990.
[6] D. L. Zhu and P. Marcotte. Co-coercivity and its role in the convergence of iterative schemes for solving variational inequalities. SIAM Journal on Optimization, 6(3):714–726, 1996.
[7] H. Lin, J. Mairal, and Z. Harchaoui. A universal catalyst for first-order optimization. In Adv. NIPS, 2015.
[8] H. H. Bauschke and P. L. Combettes. Convex Analysis and Monotone Operator Theory in Hilbert Spaces. Springer Science & Business Media, 2011.
[9] D. Woodruff. Sketching as a tool for numerical linear algebra. Technical Report 1411.4357, arXiv, 2014.
[10] A. Beck and M. Teboulle. A fast iterative shrinkage-thresholding algorithm for linear inverse problems. SIAM Journal on Imaging Sciences, 2(1):183–202, 2009.
[11] X. Zhu and A. J. Storkey. Adaptive stochastic primal-dual coordinate descent for separable saddle point problems. In Machine Learning and Knowledge Discovery in Databases, pages 645–658. Springer, 2015.
[12] Y. Zhang and L. Xiao. Stochastic primal-dual coordinate method for regularized empirical risk minimization. In Proc. ICML, 2015.
[13] T. Joachims. A support vector method for multivariate performance measures. In Proc. ICML, 2005.
[14] R. Herbrich, T. Graepel, and K. Obermayer. Large margin rank boundaries for ordinal regression. In Adv. NIPS, 1999.
[15] F. Bach, R. Jenatton, J. Mairal, and G. Obozinski. Optimization with sparsity-inducing penalties. Foundations and Trends in Machine Learning, 4(1):1–106, 2012.
[16] A. Ben-Tal, L. El Ghaoui, and A. Nemirovski. Robust Optimization. Princeton University Press, 2009.
[17] L. Xu, J. Neufeld, B. Larson, and D. Schuurmans. Maximum margin clustering. In Adv. NIPS, 2004.
[18] F. Bach, J. Mairal, and J. Ponce. Convex sparse matrix factorizations. Technical Report 0812.1869, arXiv, 2008.
[19] G. H. G. Chen and R. T. Rockafellar. Convergence rates in forward-backward splitting. SIAM Journal on Optimization, 7(2):421–444, 1997.
[20] A. Chambolle and T. Pock. A first-order primal-dual algorithm for convex problems with applications to imaging. Journal of Mathematical Imaging and Vision, 40(1):120–145, 2011.
[21] Y. Nesterov. Introductory Lectures on Convex Optimization. Kluwer, 2004.
[22] L. Rosasco, S. Villa, and B. C. Vũ. A stochastic forward-backward splitting method for solving monotone inclusions in Hilbert spaces. Technical Report 1403.7999, arXiv, 2014.
[23] K. L. Clarkson, E. Hazan, and D. P. Woodruff. Sublinear optimization for machine learning. Journal of the ACM (JACM), 59(5):23, 2012.
[24] L. Xiao and T. Zhang. A proximal stochastic gradient method with progressive variance reduction. SIAM Journal on Optimization, 24(4):2057–2075, 2014.
[25] M. Schmidt, R. Babanezhad, M. O. Ahmed, A. Defazio, A. Clifton, and A. Sarkar. Non-uniform stochastic average gradient method for training conditional random fields. In Proc. AISTATS, 2015.
[26] R. Harikandeh, M. O. Ahmed, A. Virani, M. Schmidt, J. Konečný, and S. Sallinen. Stop wasting my gradients: Practical SVRG. In Adv. NIPS, 2015.
[27] R. T. Rockafellar. Monotone operators and the proximal point algorithm. SIAM Journal on Control and Optimization, 14(5):877–898, 1976.
[28] E. Ryu and S. Boyd. A primer on monotone operator methods. Appl. Comput. Math., 15(1):3–43, 2016.
[29] H. Raguet, J. Fadili, and G. Peyré. A generalized forward-backward splitting. SIAM Journal on Imaging Sciences, 6(3):1199–1226, 2013.
[30] D. Davis. Smart: The stochastic monotone aggregated root-finding algorithm. Technical Report 1601.00698, arXiv, 2016.
[31] X. Zeng and M. Figueiredo. Solving OSCAR regularization problems by fast approximate proximal splitting algorithms. Digital Signal Processing, 31:124–135, 2014.
Simple and Efficient Weighted Minwise Hashing
Anshumali Shrivastava
Department of Computer Science
Rice University
Houston, TX, 77005
anshumali@rice.edu
Abstract
Weighted minwise hashing (WMH) is one of the fundamental subroutines, required by many celebrated approximation algorithms, commonly adopted in industrial practice for large-scale search and learning. The resource bottleneck with WMH is the computation of multiple (typically a few hundreds to thousands) independent hashes of the data. We propose a simple rejection-type sampling scheme based on a carefully designed red-green map, where we show that the number of rejected samples has exactly the same distribution as weighted minwise sampling. The running time of our method, for many practical datasets, is an order of magnitude smaller than existing methods. Experimental evaluations, on real datasets, show that for computing 500 WMH, our proposal can be 60000x faster than Ioffe's method without losing any accuracy. Our method is also around 100x faster than approximate heuristics capitalizing on the efficient "densified" one permutation hashing schemes [26, 27]. Given the simplicity of our approach and its significant advantages, we hope that it will replace existing implementations in practice.
1 Introduction
(Weighted) Minwise Hashing (or Sampling), [2, 4, 17] is the most popular and successful
randomized hashing technique, commonly deployed in commercial big-data systems for
reducing the computational requirements of many large-scale applications [3, 1, 25].
Minwise sampling is a known LSH for the Jaccard similarity [22]. Given two positive vectors
x, y ∈ R^D, x, y > 0, the (generalized) Jaccard similarity is defined as

    J(x, y) = Σ_{i=1}^D min{x_i, y_i} / Σ_{i=1}^D max{x_i, y_i}.    (1)
J(x, y) is a frequently used measure for comparing web-documents [2], histograms (especially images [13]), gene sequences [23], etc. Recently, it was shown to be a very effective kernel for
large-scale non-linear learning [15]. WMH leads to the best-known LSH for L1 distance [13],
commonly used in computer vision, improving over [7].
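Computing J(x, y) directly is straightforward (a minimal sketch of Equation 1; the hashes discussed next are cheap unbiased estimators of this quantity):

import numpy as np

def jaccard(x, y):
    # generalized Jaccard similarity of two non-negative vectors
    return np.minimum(x, y).sum() / np.maximum(x, y).sum()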
Weighted Minwise Hashing (WMH) (or Minwise Sampling) generates a randomized hash (or fingerprint) h(x) of the given data vector x ≥ 0, such that for any pair of vectors x and y, the probability of hash collision (or agreement of hash values) is given by

    Pr(h(x) = h(y)) = Σ_i min{x_i, y_i} / Σ_i max{x_i, y_i} = J(x, y).    (2)

A notable special case is when x and y are binary (or sets), i.e. x, y ∈ {0, 1}^D. For this case, the similarity measure boils down to J(x, y) = Σ_i min{x_i, y_i} / Σ_i max{x_i, y_i} = |x ∩ y|/|x ∪ y|.
30th Conference on Neural Information Processing Systems (NIPS 2016), Barcelona, Spain.
Being able to generate a randomized signature, h(x), satisfying Equation 2 is the key breakthrough behind some of the best-known approximation algorithms for metric labelling [14],
A typical requirement for algorithms relying on minwise hashing is to generate, some large
enough, k independent Minwise hashes (or fingerprints) of the data vector x, i.e. compute
hi (x) i ? {1, 2, ..., k} repeatedly with independent randomization. These independent hashes
can then be used for a variety of data mining tasks such as cheap similarity estimation,
indexing for sublinear-search, kernel features for large scale learning, etc. The bottleneck
step in all these applications is the costly computation of the multiple hashes, which requires
multiple passes over the data. The number of required hashes typically ranges from few
hundreds to several thousand [26]. For example, the number of hashes required by the
famous LSH algorithm is O(n? ) which grows with the size of the data. [15] showed the
necessity of around 4000 hashes per data vector in large-scale learning with J(x, y) as the
kernel, making hash generation the most costly step.
Owing to the significance of WMH and its impact in practice, there is a series of work over
the last decade trying to reduce its computation cost [11]. The first groundbreaking work on Minwise hashing [2] computed hashes h(x) only for unweighted sets x (or binary vectors), i.e. when the vector components x_i can only take values 0 and 1. Later it was realized that vectors with positive integer weights, which are equivalent to weighted sets, can be reduced to unweighted sets by replicating elements in proportion to their weights [10, 11]. This scheme was very expensive due to the blowup in the number of elements caused by replications. Also, it cannot handle real weights. In [11], the authors showed a few approximate solutions to reduce these replications.
Later, [17] introduced the concept of consistent weighted sampling (CWS), which focuses on sampling directly from some well-tailored distribution to avoid any replication. This method, unlike previous ones, could handle real weights exactly. Going a step further, Ioffe [13] was able to compute the exact distribution of minwise sampling, leading to a scheme with worst-case O(d) cost, where d is the number of non-zeros. This is the fastest known exact weighted minwise sampling scheme, which will also be our main baseline.
O(dk) for computing k independent hashes is very expensive for modern massive datasets, especially when k ranges up to thousands. Recently, there was a big success for the binary case, where, using the novel idea of "densification" [26, 27, 25], the computation time for unweighted minwise was brought down to O(d + k). This resulted in over 100-1000 fold improvement. However, this speedup was limited only to binary vectors. Moreover, the samples were not completely independent.
Capitalizing on recent advances for fast unweighted minwise hashing, [11] exploited the old
idea of replication to convert weighted sets into unweighted sets. To deal with non-integer
weights, the method samples the coordinates with probabilities proportional to leftover
weights. The overall process converts the weighted minwise sampling to an unweighted
problem, however, at a cost of incurring some bias (see Algorithm 2). This scheme is faster
than Ioffe's scheme but, unlike other prior works on CWS, it is not exact and leads to biased
and correlated samples. Moreover, it requires strong and expensive independence [12].
All these lines of work lead to a natural question: does there exist an unbiased and independent WMH scheme with the same properties as Ioffe's hashes but significantly faster than all existing methodologies? We answer this question positively.
1.1 Our Contributions:
1. We provide an unbiased weighted minwise hashing scheme, where each hash computation takes time inversely proportional to the effective sparsity (defined later), which can be an order of magnitude (or more) smaller than O(d). This improves upon the best-known scheme in the literature by Ioffe [13] for a wide range of datasets. Experimental evaluations on real datasets show more than 60000x speedup over the best known exact scheme, and around 100x over biased approximate schemes based on the recent idea of fast minwise hashing.
2. In practice, our hashing scheme requires far fewer bits, usually 5-9 bits instead of the 64 bits (or more) required by existing schemes, leading to around 8x savings in space, as shown on real datasets.
3. We derive our scheme from elementary first principles. Our scheme is simple and only requires access to a uniform random number generator, instead of the costly sampling and transformations needed by other methods. The hashing procedure is different from traditional schemes and could be of independent interest in itself. Our scheme naturally quantifies when and how much saving we can obtain compared to existing methodologies.
4. Weighted Minwise sampling is a fundamental subroutine in many celebrated approximation algorithms. Some of the immediate consequences of our proposal are as follows:
- We obtain an algorithmic improvement over the query time of LSH-based algorithms for L1 distance and Jaccard similarity search.
- We reduce the kernel feature [21] computation time with min-max kernels [15].
- We reduce the sketching time for fast estimation of a variety of measures, including L1 and earth mover distance [14, 5].
2 Review: Ioffe's Algorithm and Fast Unweighted Minwise Hashing
We briefly review the state-of-the-art methodologies for Weighted Minwise Hashing (WMH). Since WMH is only defined for weighted sets, our vectors under consideration will always be positive, i.e. every x_i ≥ 0. D will denote the dimensionality of the data, and we will use d to denote the number (or the average number) of non-zeros of the vector(s) under consideration.
The fastest known scheme for exact weighted minwise hashing is based on an elegant derivation of the exact sampling process for "Consistent Weighted Sampling" (CWS) due to Ioffe [13], which is summarized in Algorithm 1. This scheme requires O(d) computations.

Algorithm 1 Ioffe's CWS [13]
Input: vector x, random seed[][]
for i = 1 to k do
    for each j s.t. x_j > 0 do
        randomseed = seed[i][j];
        Sample r_{i,j}, c_{i,j} ~ Gamma(2, 1).
        Sample β_{i,j} ~ Uniform(0, 1)
        t_j = ⌊log x_j / r_{i,j} + β_{i,j}⌋
        y_j = exp(r_{i,j}(t_j − β_{i,j}))
        z_j = y_j × exp(r_{i,j})
        a_j = c_{i,j}/z_j
    end for
    k* = argmin_j a_j
    HashPairs[i] = (k*, t_{k*})
end for
RETURN HashPairs[]

O(d) for a single hash computation is quite expensive. Even the unweighted case of minwise hashing had complexity O(d) per hash, until recently. [26, 27] showed a new one-permutation-based scheme for generating k near-independent unweighted minwise hashes in O(d + k), breaking the old O(dk) barrier. However, this improvement does not directly extend to the weighted case. Nevertheless, it leads to a very powerful heuristic in practice.
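A minimal Python sketch of one CWS hash following Algorithm 1 (our own rendering; note that for consistency across different vectors, the Gamma/Uniform draws must be a fixed function of the coordinate index j, as with seed[i][j] in the pseudocode — the single seeded stream over the non-zeros below is a simplification for brevity):

import numpy as np

def ioffe_cws_hash(x, seed):
    rng = np.random.default_rng(seed)
    nz = np.flatnonzero(x > 0)
    r = rng.gamma(2.0, 1.0, size=len(nz))
    c = rng.gamma(2.0, 1.0, size=len(nz))
    beta = rng.uniform(0.0, 1.0, size=len(nz))
    t = np.floor(np.log(x[nz]) / r + beta)
    y = np.exp(r * (t - beta))
    a = c / (y * np.exp(r))
    k_star = np.argmin(a)
    return nz[k_star], t[k_star]   # the hash is the pair (k*, t_{k*})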
It was known that, with some bias, weighted minwise sampling can be reduced to unweighted minwise sampling using the idea of sampling weights in proportion to their probabilities [10, 14]. Algorithm 2 describes such a procedure. A reasonable idea is then to run the fast unweighted hashing scheme on top of this biased approximation [11, 24]. The inner for-loop in Algorithm 2 blows up the number of non-zeros in the returned unweighted set; this makes the process slower and dependent on the magnitudes of the weights. Moreover, unweighted sampling requires very costly random permutations for good accuracy [20].

Algorithm 2 Reduce to Unweighted [11]
input Vector x
S = \emptyset
for each j s.t. x_j > 0 do
    floorx_j = \lfloor x_j \rfloor
    for i = 1 to floorx_j do
        S = S \cup (i, j)
    end for
    r = Uniform(0, 1)
    if r <= x_j - floorx_j then
        S = S \cup (floorx_j + 1, j)
    end if
end for
RETURN S (unweighted set)
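A direct transcription in Python might look as follows (a sketch; the function name is ours). Note how the output size grows with the magnitudes \lfloor x_j \rfloor, which is exactly what makes the subsequent unweighted hashing slow on vectors with large weights.

```python
import numpy as np

def reduce_to_unweighted(x, rng=np.random):
    """Biased reduction of a weighted vector to an unweighted set (Algorithm 2)."""
    S = set()
    for j in np.flatnonzero(x):
        floor_xj = int(np.floor(x[j]))
        for i in range(1, floor_xj + 1):          # replicate coordinate j floor(x_j) times
            S.add((i, int(j)))
        if rng.uniform() <= x[j] - floor_xj:      # keep fractional part with prob x_j - floor(x_j)
            S.add((floor_xj + 1, int(j)))
    return S
```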
Both Ioffe's scheme and the biased unweighted approximation generate big hash values, requiring 32 bits or more of storage per hash value. To reduce this to a manageable size of, say, 4-8 bits, a commonly adopted practical methodology is to randomly rehash into a smaller space at the cost of a loss in accuracy [16]. It turns out that our hashing scheme generates 5-9 bit values h(x) satisfying Equation 2, without losing any accuracy, for many real datasets.
3 Our Proposal: New Hashing Scheme
We first describe our procedure in detail. We will later argue the correctness of the scheme, and then discuss its runtime complexity and other practical issues.
3.1 Procedure
We will denote the ith component of a vector x \in R^D by x_i. Let m_i be an upper bound on the value of component x_i in the given dataset. We can always assume m_i to be an integer; otherwise we take the ceiling \lceil m_i \rceil as our upper bound. Define

\sum_{k=1}^{i} m_k = M_i  and  \sum_{k=1}^{D} m_k = M_D = M.   (3)

[Figure 1: Illustration of the red-green map of 4-dimensional vectors x and y.]

If the data is normalized, then m_i = 1 and M = D.
Given a vector x, we first create a red-green map associated with it, as shown in Figure 1. For this, we take the interval [0, M] and divide it into D disjoint intervals, the ith interval being [M_{i-1}, M_i], which has size m_i. Note that \sum_{i=1}^{D} m_i = M, so we can always do that. We then create two regions, red and green. For the ith interval [M_{i-1}, M_i], we mark the subinterval [M_{i-1}, M_{i-1} + x_i] as green and the rest, [M_{i-1} + x_i, M_i], as red, as shown in Figure 1. If x_i = 0 for some i, then the whole ith interval [M_{i-1}, M_i] is marked red.
Formally, for a given vector x, define the green region x_{green} and the red region x_{red} as follows:

x_{green} = \bigcup_{i=1}^{D} [M_{i-1}, M_{i-1} + x_i];   x_{red} = \bigcup_{i=1}^{D} [M_{i-1} + x_i, M_i].   (4)
Our sampling procedure simply draws an independent random real number in [0, M]; if the number lies in the red region, we repeat and re-sample. We stop as soon as the generated random number lies in the green region. Our hash value for a given data vector, h(x), is simply the number of steps taken before we stop. We summarize the procedure in Algorithm 3. More formally,

Definition 1 Let {r_i : i = 1, 2, 3, ...} be a sequence of i.i.d. random numbers generated uniformly on [0, M]. Then we define the hash of x as

h(x) = \arg\min_i r_i,  s.t.  r_i \in x_{green}.   (5)

Algorithm 3 Weighted MinHash
input Vector x, the M_i's, k, random seed[]
Initialise Hashes[] to all 0s.
for i = 1 to k do
    randomseed = seed[i]
    while true do
        r = M \times Uniform(0, 1)
        if ISGREEN(r) (check whether r \in x_{green}) then
            break
        end if
        randomseed = \lceil r \times 1000000 \rceil
        Hashes[i]++
    end while
end for
RETURN Hashes

Our procedure can be viewed as a form of rejection sampling [30]. To the best of our knowledge, there is no prior evidence in the literature that the number of rejected samples has a locality sensitive property.

We want our hashing scheme to be consistent [13] across different data points in order to guarantee Equation 2. This requires ensuring the consistency of the random numbers used in the hashes [13]. We could achieve the required consistency by pre-generating the sequence of random numbers and storing it, analogous to other hashing schemes. However, there is an easy way to generate a fixed sequence of random numbers on the fly by ensuring the consistency of the random seed; this requires no storage except the starting seed. Algorithm 3 uses this criterion to ensure the consistency of the random numbers. We start with a fixed random seed for generating random numbers. If the generated random number lies in the red region, then before re-sampling we reset the seed of the random number generator as a function of the discarded random number. In the algorithm, we use \lceil 1000000 \times r \rceil, where \lceil \cdot \rceil is the ceiling operation, as a convenient way to ensure the consistency of the sequence without any memory overhead. This seems to work nicely in practice. Since we are sampling real numbers, the probability of any repetition (or cycle) is zero. For generating k independent hashes, we just use k different random seeds, which are kept fixed for the entire dataset.
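The whole scheme fits in a few lines of code. The sketch below takes ISGREEN as a callback so that the naive, binary-search, and O(1) implementations of Section 3.3 can be swapped in; the function names and the modulus on the reseeding trick are our own choices.

```python
import numpy as np

def weighted_minhash(x, M, isgreen, k, seeds):
    """Proposed rejection-sampling WMH (Algorithm 3).

    isgreen(r) must return True iff r lies in the green region of x.
    Each of the k hashes restarts from its own fixed seed, and the seed is
    reset deterministically from every rejected draw, so the hash is
    consistent across all vectors in the dataset.
    """
    hashes = np.zeros(k, dtype=np.int64)
    for i in range(k):
        rng = np.random.RandomState(seeds[i])
        while True:
            r = M * rng.uniform()
            if isgreen(r):
                break                                        # stopped in the green region
            # reseed as a function of the discarded number (kept in the 32-bit seed range)
            rng = np.random.RandomState(int(np.ceil(r * 1000000)) % (2**32))
            hashes[i] += 1                                   # h(x) counts the rejections
    return hashes
```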
3.2 Correctness
We show that the simple, but very unusual, scheme given in Algorithm 3 actually possesses the required property, i.e., for any pair of points x and y, Equation 2 holds. Unlike previous works along this line [17, 13], which require computing the exact distribution of the associated quantities, the proof for our proposed scheme is elementary and can be derived from first principles. This is not surprising given the simplicity of our procedure.
Theorem 1 For any two vectors x and y, we have

Pr(h(x) = h(y)) = J(x, y) = \frac{\sum_{i=1}^{D} \min\{x_i, y_i\}}{\sum_{i=1}^{D} \max\{x_i, y_i\}}.   (6)
Theorem 1 implies that the sampling process is exact, and we automatically have an unbiased estimator of J(x, y) using k independently generated WMHs h_i(x) from Algorithm 3:

\hat{J} = \frac{1}{k} \sum_{i=1}^{k} 1\{h_i(x) = h_i(y)\};   E(\hat{J}) = J(x, y);   Var(\hat{J}) = \frac{J(x, y)(1 - J(x, y))}{k},   (7)

where 1 is the indicator function.
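In code, the estimator in (7) is a single comparison-and-mean; a small sketch with the exact weighted Jaccard for reference (function names are ours):

```python
import numpy as np

def jaccard_exact(x, y):
    """Weighted Jaccard similarity (Equation 6)."""
    return np.minimum(x, y).sum() / np.maximum(x, y).sum()

def jaccard_estimate(hx, hy):
    """Unbiased estimate of J(x, y) from k hash values (Equation 7).

    The variance J(1 - J)/k shrinks as more hashes are used.
    """
    return float(np.mean(np.asarray(hx) == np.asarray(hy)))
```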
3.3 Running Time Analysis and Fast Implementation
Define

s_x = \frac{\text{size of green region}}{\text{size of red region} + \text{size of green region}} = \frac{\sum_{i=1}^{D} x_i}{M} = \frac{\|x\|_1}{M}   (8)

as the effective sparsity of the vector x. Note that this is also the probability Pr(r \in x_{green}).
Algorithm 3 has a while loop. We show that the expected number of times the while loop runs, which is also the expected value of h(x), is the inverse of the effective sparsity. Formally,
Theorem 2

E(h(x)) = \frac{1}{s_x};   Var(h(x)) = \frac{1 - s_x}{s_x^2};   Pr\left( h(x) \ge \frac{\log \delta}{\log(1 - s_x)} \right) \le \delta.   (9)
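As a quick numeric check of Theorem 2, take an image-histogram-like effective sparsity of s_x = 0.05 (a value we pick purely for illustration): the expected number of draws is 20, and the chance of needing 180 or more draws is below 10^{-4}.

```python
import numpy as np

s_x = 0.05                                  # assumed effective sparsity (illustrative)
print(1.0 / s_x)                            # E[h(x)]   = 20 draws on average
print((1.0 - s_x) / s_x**2)                 # Var[h(x)] = 380
delta = 1e-4
print(np.log(delta) / np.log(1.0 - s_x))    # ~179.6: Pr(h(x) >= 180) <= 1e-4
```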
3.4 When is this advantageous over Ioffe's scheme?
The time to compute each hash value is, in expectation, the inverse of the effective sparsity, 1/s_x. This is a very different quantity from existing solutions, which need O(d). For datasets with 1/s_x << d, we can expect our method to be much faster. For real datasets such as image histograms, where minwise sampling is popular [13], this sparsity is of the order of 0.02-0.08 (see Section 4.2), leading to 1/s_x of roughly 13-50; on the other hand, the number of non-zeros is around half a million. Therefore, we can expect significant speed-ups.
Corollary 1 The expected number of bits required to represent h(x) is small; in particular,

E(bits) \le -\log s_x;   E(bits) \le \log\frac{1}{s_x} - \frac{1 - s_x}{2}.   (10)
Existing hashing schemes require 64 bits, which is quite expensive. A popular approach for reducing space uses only the least significant bits of the hashes [16, 13]; this tradeoff in space comes at the cost of accuracy [16]. Our hashing scheme naturally requires only a few bits, typically 5-9 (see Section 4.2), eliminating the need to trade accuracy for manageable space.
We know from Theorem 2 that each hash computation requires 1/s_x calls to ISGREEN(r). If we can implement ISGREEN(r) in constant time, i.e., O(1), then we can generate k independent hashes in total O(d + k/s_x) time instead of the O(dk) required by [13]. Note that O(d) is the time to read the input vector, which cannot be avoided. Once the data is loaded into memory, our procedure actually takes O(k/s_x) to compute k hashes, for all k >= 1. This can be a huge improvement, as in many real scenarios 1/s_x << d.
Before we jump to a constant-time implementation of ISGREEN(r), we would like readers to note that there is a straightforward binary search algorithm for ISGREEN(r) that runs in log d time. We consider the d intervals [M_{i-1}, M_{i-1} + x_i] for all i such that x_i \ne 0. Because of the nature of the problem, M_{i-1} + x_{i-1} \le M_i for all i; therefore these intervals are disjoint and sorted. Hence, given a random number r, determining whether r \in \bigcup_{i=1}^{D} [M_{i-1}, M_{i-1} + x_i] only needs a binary search over d ranges. Thus, in expectation, we already have a scheme that generates k independent hashes in total O(d + (k/s_x) log d) time, improving over the best known O(dk) required by [13] for exact unbiased sampling whenever d >> 1/s_x.
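With the green intervals stored as sorted prefix sums, the log d test is one searchsorted call; a sketch (array layout and names are our own):

```python
import numpy as np

def make_isgreen_binary(x, M_prefix):
    """ISGREEN(r) in O(log d) by binary search.

    M_prefix = [M_0 = 0, M_1, ..., M_D]; component i owns [M_{i-1}, M_i] and its
    green sub-interval is [M_{i-1}, M_{i-1} + x_i].
    """
    def isgreen(r):
        i = int(np.searchsorted(M_prefix, r, side='right')) - 1   # owning interval
        return r <= M_prefix[i] + x[i]
    return isgreen
```

Plugged into the sketch of Algorithm 3 above, this already gives the O(d + (k/s_x) log d) scheme.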
We now show that, with some algorithmic tricks and a few more data structures, we can implement ISGREEN(r) in constant time, O(1). We need two global pre-computed hashmaps: IntToComp (integer to vector component) and CompToM (vector component to M value). IntToComp maps every integer in [0, M] to its associated component, i.e., all integers in [M_{i-1}, M_i] are mapped to i, because they are associated with the ith component. CompToM maps every component i \in {1, 2, 3, ..., D} to its associated value M_{i-1}, the start of its interval. The procedure for computing these hashmaps is straightforward and is summarized in Algorithm 4. It should be noted that computing these hash-maps is a one-time pre-processing operation over the entire dataset, with negligible cost. The m_i's can be computed (estimated) while reading the data.

Algorithm 4 ComputeHashMaps (once per dataset)
input the m_i's
index = 0, CompToM[0] = 0
for i = 0 to D - 1 do
    if i < D - 1 then
        CompToM[i + 1] = m_i + CompToM[i]
    end if
    for j = 0 to m_i - 1 do
        IntToComp[index] = i
        index++
    end for
end for
RETURN CompToM[] and IntToComp[]
Using these two pre-computed hashmaps, the ISGREEN(r) methodology works as follows. We first compute the ceiling of r, i.e., \lceil r \rceil; we then find the component i associated with r and the corresponding interval start M_i using the hashmaps IntToComp and CompToM. Finally, we return true if r \le M_i + x_i, and false otherwise. The main observation is that, since we ensure all the M_i's are integers, for any real number r, if r lies in component i's interval then so does \lceil r \rceil; hence we can use \lceil r \rceil as the hashmap key. The overall procedure is summarized in Algorithm 5.

Algorithm 5 ISGREEN(r)
input r, x, hashmaps IntToComp[] and CompToM[] from Algorithm 4
index = \lceil r \rceil
i = IntToComp[index]
M_i = CompToM[i]
if r \le M_i + x_i then
    RETURN TRUE
end if
RETURN FALSE
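Putting Algorithms 4 and 5 together in NumPy: an array of length M stands in for the integer hashmap, giving two lookups per ISGREEN call (a sketch under the same naming assumptions as before):

```python
import numpy as np

def compute_hashmaps(m):
    """Algorithm 4: one-time preprocessing from the integer bounds m_1..m_D.

    comp_to_M[i]   = start of component i's interval (the prefix sum M_{i-1});
    int_to_comp[t] = component owning the unit interval (t, t+1], t = 0..M-1.
    """
    m = np.asarray(m, dtype=np.int64)
    comp_to_M = np.zeros(m.size, dtype=np.int64)
    comp_to_M[1:] = np.cumsum(m)[:-1]
    int_to_comp = np.repeat(np.arange(m.size), m)        # length M
    return comp_to_M, int_to_comp

def make_isgreen_O1(x, comp_to_M, int_to_comp):
    """Algorithm 5: constant-time ISGREEN(r) via two array lookups."""
    def isgreen(r):
        # ceil(r) keys the owning component; r = 0 occurs with probability zero
        i = int_to_comp[int(np.ceil(r)) - 1]
        return r <= comp_to_M[i] + x[i]                  # green iff within [M_{i-1}, M_{i-1}+x_i]
    return isgreen
```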
Note that our overall procedure is much simpler than Algorithm 1: we only generate random numbers followed by a simple condition check using two hash lookups, and our analysis shows that we have to repeat this only a small number of times. Compare this with Ioffe's scheme, where for every non-zero component of the vector we must sample two Gamma variables and then compute several expensive transformations, including exponentials. We next demonstrate the benefits of our approach in practice.
4 Experiments
In this section, we demonstrate that in real high-dimensional settings our proposal provides significant speedups and requires less memory than existing methods. We also validate our theory that our scheme is unbiased and should be indistinguishable in accuracy from Ioffe's method.
Baselines: Ioffe's method is the fastest known exact method in the literature, so it serves as our natural baseline. We also compare our method with the biased unweighted approximation (see Algorithm 2), which capitalizes on the recent success of fast unweighted minwise hashing [26, 27]; we call it Fast-WDOPH (Fast Weighted Densified One Permutation Hashing). Fast-WDOPH needs a very long permutation, which is expensive; for efficiency, we implemented the permutation using fast 2-universal hashing, which is always recommended [18].
[Figure 2: six panels of average error versus the number of hashes (k = 1 to 50) for the Proposed, Fast-WDOPH, and Ioffe schemes, at similarity levels 0.80, 0.72, 0.61, 0.56, 0.44, and 0.27.]
Figure 2: Average Errors in Jaccard Similarity Estimation with the Number of Hash Values. Estimates are averaged over 200 repetitions.
Datasets: Weighted Minwise sampling is commonly used for sketching image histograms [13]. We chose two popular, publicly available vision datasets, Caltech101 [9] and Oxford [19]. We used the standard, publicly available Histogram of Oriented Gradients (HOG) codes [6], popular in vision tasks, to convert images into feature vectors. In addition, we also used random web images [29] and computed simple histograms of RGB values; we call this dataset Hist. The statistics of these datasets are summarized in Table 1. These datasets cover a wide range of variations in terms of dimensionality, number of non-zeros, and sparsity.

Table 1: Basic Statistics of the Datasets
Data       | non-zeros (d) | Dim (D) | Sparsity (s)
Hist       | 737           | 768     | 0.081
Caltech101 | 95029         | 485640  | 0.024
Oxford     | 401879        | 580644  | 0.086
4.1 Comparing Estimation Accuracy
In this section, we perform a sanity-check experiment and compare the estimation accuracy of the WMH schemes. For this task we take 9 pairs of vectors from our datasets with varying levels of similarity. For each pair (x, y), we generate k weighted minwise hashes h_i(x) and h_i(y) for i \in {1, 2, ..., k} using the three competing schemes. We then compute the estimate of the Jaccard similarity J(x, y) using the formula \hat{J} = \frac{1}{k} \sum_{i=1}^{k} 1\{h_i(x) = h_i(y)\} (see Equation 7), and we compute the error of this estimate as a function of k. To minimize the effect of randomization, we average the errors over 200 random repetitions with different seeds. We plot the average error for k = {1, 2, ..., 50} in Figure 2 for different similarity levels.

Table 2: Time taken in milliseconds (ms) to compute 500 hashes by the different schemes. Our proposed scheme is significantly faster.
Dataset    | Prop | Ioffe    | Fast-WDOPH
Hist       | 10ms | 986ms    | 57ms
Caltech101 | 57ms | 87105ms  | 268ms
Oxford     | 11ms | 746120ms | 959ms
We can clearly see from the plots that the accuracy of the proposed scheme is indistinguishable from that of Ioffe's scheme. This is not surprising, because both schemes are unbiased and have the same theoretical distribution. This validates Theorem 1.
The accuracy of Fast-WDOPH is inferior to that of the two unbiased schemes, and sometimes its performance is poor. This is because the weighted-to-unweighted reduction is biased and approximate; the bias of this reduction depends on the vector pair under consideration, which can make it unpredictable.
4.2 Speed Comparisons
We compute the average time (in milliseconds) taken by the competing algorithms to compute 500 hashes of a given data vector for all three datasets. Our experiments were coded in C# on an Intel Xeon CPU with 256 GB RAM. Table 2 summarizes the comparison. We do not include the data loading cost in these numbers and assume that the data is in memory for all three methodologies.
We can clearly see a tremendous speedup over Ioffe's scheme. For the Hist dataset, with a mere 768 non-zeros, our scheme is 100 times faster than Ioffe's scheme and around 5 times faster than the Fast-WDOPH approximation. On the Caltech101 and Oxford datasets, which are high dimensional and dense, our scheme is 1500x to 60000x faster than Ioffe's scheme, and around 5x to 100x faster than the Fast-WDOPH scheme. Dense datasets like Caltech101 and Oxford represent more realistic scenarios: these features are taken from real applications [6], and such levels of sparsity and dimensionality are common in practice.

Table 3: The range of the observed hash values with the proposed scheme, along with the maximum bits needed per hash value. The mean hash values agree with Theorem 2.
            | Hist    | Caltech101 | Oxford
Mean Values | 11.94   | 52.88      | 9.13
Hash Range  | [1,107] | [1,487]    | [1,69]
Bits Needed | 7       | 9          | 7
The results are not surprising, because Ioffe's scheme is very slow, O(dk). Moreover, the constants inside the big-O are also large because of the complex transformations. Therefore, for datasets with a large number of non-zeros d, this scheme is very slow. A similar phenomenon was observed in [13]: decreasing the number of non-zeros by ignoring non-frequent dimensions can be around 150 times faster. However, ignoring dimensions loses accuracy.
4.3 Memory Comparisons
Table 3 summarizes the range of the hash values and the maximum number of bits needed to encode them without any bias. We can clearly see that the hash values, even for such high-dimensional datasets, require only 7-9 bits. This is a huge saving compared to existing hashing schemes, which require 32-64 bits [16]. Thus, our method leads to around 5-6x savings in space. The mean values observed in Table 3 validate the formula in Theorem 2.
5 Discussions
Theorem 2 shows that the quantity s_x = \frac{\sum_{i=1}^{D} x_i}{\sum_{i=1}^{D} m_i} determines the runtime. If s_x is very small, then although the running time is constant (independent of d or D), the algorithm can still be unnecessarily slow. Note that for the algorithm to work we choose m_i to be the smallest integer no less than the maximum possible value of coordinate i in the given dataset. If this integer gap is big, we unnecessarily increase the running time. Ideally, the best running time is obtained when the maximum value is itself an integer, or is very close to its ceiling. If all the values are integers, scaling up does not matter, as it does not change s_x, but scaling down can make s_x worse. Ideally, we should scale by
\gamma^* = \arg\max_{\gamma} \frac{\sum_{i=1}^{D} \gamma x_i}{\sum_{i=1}^{D} \lceil \gamma m_i \rceil},

where m_i is the maximum value of coordinate i.
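There is no closed form for \gamma^* because of the ceilings, but a one-dimensional grid search over a dataset summary is cheap; a sketch (the grid bounds are an arbitrary choice of ours):

```python
import numpy as np

def best_scale(x_l1_mean, m, grid=None):
    """Grid-search the scaling gamma* that maximizes the effective sparsity.

    x_l1_mean : average ||x||_1 over the dataset (gamma scales it linearly);
    m         : array of per-coordinate maxima m_i.
    """
    if grid is None:
        grid = np.linspace(0.1, 10.0, 1000)       # assumed search range
    m = np.asarray(m, dtype=float)
    scores = [g * x_l1_mean / np.ceil(g * m).sum() for g in grid]
    return float(grid[int(np.argmax(scores))])
```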
5.1 Very Sparse Datasets
For very sparse datasets, the information is more or less in the sparsity pattern rather than in the magnitudes [28]. Binarization of very sparse datasets is a common practice, and densified one permutation hashing [26, 27] provably solves the binary problem in O(d + k). Nevertheless, for applications where the data is extremely sparse and the magnitudes of the components seem crucial, binary approximation followed by densified one permutation hashing (Fast-DOPH) should be the preferred method. Ioffe's scheme is preferable, due to its exactness, when the number of non-zeros is of the order of k.
6 Acknowledgements
This work is supported by Rice Faculty Initiative Award 2016-17. We would like to thank the anonymous reviewers, Don Macmillen, and Ryan Moulton for feedback on the presentation of the paper.
References
[1] R. J. Bayardo, Y. Ma, and R. Srikant. Scaling up all pairs similarity search. In WWW, pages 131?140, 2007.
[2] A. Z. Broder. On the resemblance and containment of documents. In the Compression and Complexity of Sequences, pages 21?29,
Positano, Italy, 1997.
[3] A. Z. Broder. Filtering near-duplicate documents. In FUN, Isola d?Elba, Italy, 1998.
[4] A. Z. Broder, S. C. Glassman, M. S. Manasse, and G. Zweig. Syntactic clustering of the web. In WWW, pages 1157 ? 1166,
Santa Clara, CA, 1997.
[5] M. S. Charikar. Similarity estimation techniques from rounding algorithms. In STOC, pages 380?388, Montreal, Quebec,
Canada, 2002.
[6] N. Dalal and B. Triggs. Histograms of oriented gradients for human detection. In Computer Vision and Pattern Recognition,
volume 1, pages 886?893. IEEE, 2005.
[7] M. Datar, N. Immorlica, P. Indyk, and V. S. Mirrokn. Locality-sensitive hashing scheme based on p-stable distributions. In
SCG, pages 253 ? 262, Brooklyn, NY, 2004.
[8] C. Dwork and A. Roth. The algorithmic foundations of differential privacy.
[9] L. Fei-Fei, R. Fergus, and P. Perona. Learning generative visual models from few training examples: An incremental bayesian
approach tested on 101 object categories. Computer Vision and Image Understanding, 106(1):59?70, 2007.
[10] S. Gollapudi and R. Panigrahy. Exploiting asymmetry in hierarchical topic extraction. In Proceedings of the 15th ACM international conference on Information and knowledge management, pages 475?482. ACM, 2006.
[11] B. Haeupler, M. Manasse, and K. Talwar. Consistent weighted sampling made fast, small, and easy. Technical report,
arXiv:1410.4266, 2014.
[12] P. Indyk. A small approximately min-wise independent family of hash functions. Journal of Algorithms, 38(1):84?90, 2001.
[13] S. Ioffe. Improved consistent sampling, weighted minhash and L1 sketching. In ICDM, pages 246?255, Sydney, AU, 2010.
[14] J. Kleinberg and E. Tardos. Approximation algorithms for classification problems with pairwise relationships: Metric labeling and Markov random fields. In FOCS, pages 14?23, New York, 1999.
[15] P. Li. 0-bit consistent weighted sampling. In KDD, 2015.
[16] P. Li and A. C. König. Theory and applications of b-bit minwise hashing. Commun. ACM, 2011.
[17] M. Manasse, F. McSherry, and K. Talwar. Consistent weighted sampling. Technical Report MSR-TR-2010-73, Microsoft
Research, 2010.
[18] M. Mitzenmacher and S. Vadhan. Why simple hash functions work: exploiting the entropy in a data stream. In Proceedings of the nineteenth annual ACM-SIAM symposium on Discrete algorithms, pages 746?755. Society for Industrial and Applied
Mathematics, 2008.
[19] J. Philbin, O. Chum, M. Isard, J. Sivic, and A. Zisserman. Object retrieval with large vocabularies and fast spatial matching.
In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2007.
[20] M. Pătraşcu and M. Thorup. On the k-independence required by linear probing and minwise independence. In ICALP, pages 715-726, 2010.
[21] A. Rahimi and B. Recht. Random features for large-scale kernel machines. In Advances in neural information processing systems,
pages 1177?1184, 2007.
[22] A. Rajaraman and J. Ullman. Mining of Massive Datasets.
[23] Z. Rasheed and H. Rangwala. Mc-minh: Metagenome clustering using minwise based hashing. SIAM.
[24] P. Sadosky, A. Shrivastava, M. Price, and R. C. Steorts. Blocking methods applied to casualty records from the syrian conflict.
arXiv preprint arXiv:1510.07714, 2015.
[25] A. Shrivastava. Probabilistic Hashing Techniques For Big Data. PhD thesis, Cornell University, 2015.
[26] A. Shrivastava and P. Li. Densifying one permutation hashing via rotation for fast near neighbor search. In ICML, Beijing,
China, 2014.
[27] A. Shrivastava and P. Li. Improved densification of one permutation hashing. In UAI, Quebec, CA, 2014.
[28] A. Shrivastava and P. Li. In defense of minhash over simhash. In Proceedings of the Seventeenth International Conference on
Artificial Intelligence and Statistics, pages 886?894, 2014.
[29] J. Wang, J. Li, D. Chan, and G. Wiederhold. Semantics-sensitive retrieval for digital picture libraries. D-Lib Magazine, 5(11),
1999.
[30] Wikipedia. https://en.wikipedia.org/wiki/Rejection_sampling.
Incremental Variational Sparse Gaussian Process Regression
Ching-An Cheng
Institute for Robotics and Intelligent Machines
Georgia Institute of Technology
Atlanta, GA 30332
cacheng@gatech.edu
Byron Boots
Institute for Robotics and Intelligent Machines
Georgia Institute of Technology
Atlanta, GA 30332
bboots@cc.gatech.edu
Abstract
Recent work on scaling up Gaussian process regression (GPR) to large datasets has
primarily focused on sparse GPR, which leverages a small set of basis functions
to approximate the full Gaussian process during inference. However, the majority
of these approaches are batch methods that operate on the entire training dataset
at once, precluding the use of datasets that are streaming or too large to fit into
memory. Although previous work has considered incrementally solving variational
sparse GPR, most algorithms fail to update the basis functions and therefore
perform suboptimally. We propose a novel incremental learning algorithm for
variational sparse GPR based on stochastic mirror ascent of probability densities
in reproducing kernel Hilbert space. This new formulation allows our algorithm
to update basis functions online in accordance with the manifold structure of
probability densities for fast convergence. We conduct several experiments and
show that our proposed approach achieves better empirical performance in terms of
prediction error than the recent state-of-the-art incremental solutions to variational
sparse GPR.
1 Introduction
Gaussian processes (GPs) are nonparametric statistical models widely used for probabilistic reasoning
about functions. Gaussian process regression (GPR) can be used to infer the distribution of a latent
function f from data. The merit of GPR is that it finds the maximum a posteriori estimate of
the function while providing the profile of the remaining uncertainty. However, GPR also has
drawbacks: like most nonparametric learning techniques the time and space complexity of GPR
scale polynomially with the amount of training data. Given N observations, inference of GPR
involves inverting an N ? N covariance matrix which requires O(N 3 ) operations and O(N 2 ) storage.
Therefore, GPR for large N is infeasible in practice.
Sparse Gaussian process regression is a pragmatic solution that trades accuracy against computational complexity. Instead of parameterizing the posterior using all N observations, the idea is to approximate the full GP using the statistics of M << N function values and to leverage the induced low-rank structure to reduce the complexity to O(M^2 N + M^3) and the memory to O(M^2).
Often sparse GPRs are expressed in terms of the distribution of f(\tilde{x}_i), where \tilde{X} = \{\tilde{x}_i \in X\}_{i=1}^{M} are called inducing points or pseudo-inputs [21, 23, 18, 26]. A more general representation leverages information about the inducing functions (L_i f)(\tilde{x}_i), defined by indirect measurement of f through a bounded linear operator L_i (e.g., an integral), to more compactly capture the full GP [27, 8]. In this work, we embrace the general notion of inducing functions, which trivially includes f(\tilde{x}_i) by choosing L_i to be the identity. With abuse of notation, we reuse the term inducing points \tilde{X} to denote the parameters that define the inducing functions.
Learning a sparse GP representation in regression can be summarized as inference of the hyperparameters, the inducing points, and the statistics of inducing functions. One approach to learning is
to treat all of the parameters as hyperparameters and find the solution that maximizes the marginal
likelihood [21, 23, 18]. An alternative approach is to view the inducing points and the statistics of
inducing functions as variational parameters of a class of full GPs, to approximate the true posterior of
f , and solve the problem via variational inference, which has been shown robust to over-fitting [26, 1].
All of the above methods are designed for the batch setting, where all of the data is collected in
advance and used at once. However, if the training dataset is extremely large or the data are streaming
and encountered in sequence, we may want to incrementally update the approximate posterior of the
latent function f. Early work by Csató and Opper [6] proposed an online version of GPR, which
greedily performs moment matching of the true posterior given one sample instead of the posterior of
all samples. More recently, several attempts have been made to modify variational batch algorithms
to incremental algorithms for learning sparse GPs [1, 9, 10]. Most of these methods rely on the
fact that variational sparse GPR with fixed inducing points and hyperparameters is equivalent to
inference of the conjugate exponential family: Hensman et al. [9] propose a stochastic approximation
of the variational sparse GPR problem [26] based on stochastic natural gradient ascent [11]; Hoang
et al. [10] generalize this approach to the case of general Gaussian process priors. Unlike the original variational algorithm for sparse GPR [26], which finds the optimal inducing points and hyperparameters, these algorithms only update the statistics of the inducing functions f_{\tilde{X}}.
In this paper, we propose an incremental learning algorithm for variational sparse GPR, which
we denote as iVSGPR. Leveraging the dual formulation of variational sparse GPR in reproducing
kernel Hilbert space (RKHS), iVSGPR performs stochastic mirror ascent in the space of probability
densities [17] to update the approximate posterior of f , and stochastic gradient ascent to update the
hyperparameters. Stochastic mirror ascent, similar to stochastic natural gradient ascent, considers the
manifold structure of probability functions and therefore converges faster than the naive gradient approach. In each iteration, iVSGPR solves a variational sparse GPR problem of the size of a minibatch.
As a result, iVSGPR has constant complexity per iteration and can learn all the hyperparameters, the
inducing points, and the associated statistics online.
2 Background
In this section, we provide a brief summary of Gaussian process regression and sparse Gaussian
process regression for efficient inference before proceeding to introduce our incremental algorithm
for variational sparse Gaussian process regression in Section 3.
2.1 Gaussian Processes Regression
Let F be a family of real-valued continuous functions f : X -> R. A GP is a distribution over functions f \in F such that, for any finite set X \subset X, \{f(x) | x \in X\} is Gaussian distributed, N(f(x) | m(x), k(x, x')): for any x, x' \in X, m(x) and k(x, x') represent the mean of f(x) and the covariance between f(x) and f(x'), respectively. In shorthand, we write f ~ GP(m, k).
The mean m(x) and the covariance k(x, x') (the kernel function) are often parametrized by a set of hyperparameters that encode our prior beliefs about the unknown function f. In this work, for simplicity, we assume that m(x) = 0 and that the kernel can be parameterized as k(x, x') = \eta^2 g_s(x, x'), where g_s(x, x') is a positive definite kernel, \eta^2 is a scaling factor, and s denotes the other hyperparameters [20].
The objective of GPR is to infer the posterior probability of the function f given data D = \{(x_i, y_i)\}_{i=1}^{N}. In learning, the function value f(x_i) is treated as a latent variable, and the observation y_i = f(x_i) + \epsilon_i is modeled as the function corrupted by i.i.d. noise \epsilon_i ~ N(\epsilon | 0, \sigma^2). Let X = \{x_i\}_{i=1}^{N}. The posterior distribution p(f|y) can be compactly summarized as GP(m_{|D}, k_{|D}):

m_{|D}(x) = k_{x,X} (K_X + \sigma^2 I)^{-1} y   (1)
k_{|D}(x, x') = k_{x,x'} - k_{x,X} (K_X + \sigma^2 I)^{-1} k_{X,x'}   (2)

where y = (y_i)_{i=1}^{N} \in R^N, k_{x,X} \in R^{1 \times N} denotes the vector of cross-covariances between x and X, and K_X \in R^{N \times N} denotes the empirical covariance matrix of the training set. The hyperparameters \theta := (s, \eta, \sigma) in the GP are learned by maximizing the log-likelihood of the observations y:

\max_\theta \log p(y) = \max_\theta \log N(y | 0, K_X + \sigma^2 I).   (3)
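For reference, equations (1)-(2) in NumPy; this is the O(N^3) full-GP computation whose cost motivates the sparse approximations below (a minimal sketch; function and argument names are ours):

```python
import numpy as np

def gp_posterior(K_X, k_sX, k_ss, y, sigma):
    """Full-GP posterior (1)-(2) at test points.

    K_X  : N x N train covariance        k_sX : N* x N cross-covariance
    k_ss : N* x N* test covariance       sigma: observation noise std
    """
    N = K_X.shape[0]
    L = np.linalg.cholesky(K_X + sigma**2 * np.eye(N))   # the O(N^3) step
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    mean = k_sX @ alpha                                  # equation (1)
    V = np.linalg.solve(L, k_sX.T)
    cov = k_ss - V.T @ V                                 # equation (2)
    return mean, cov
```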
2.2 Sparse Gaussian Processes Regression
A straightforward approach to sparse GPR is to approximate the GP prior of interest with a degenerate GP [21, 23, 18]. Formally, for any x_i, x_j \in X, it assumes that

f(x_i) \perp y_i | f_{\tilde{X}},   f(x_i) \perp f(x_j) | f_{\tilde{X}},   (4)

where f_{\tilde{X}} denotes ((L_i f)(\tilde{x}_i))_{i=1}^{M} and \perp denotes probabilistic independence between two random variables. That is, the original empirical covariance matrix K_X is replaced by the rank-M approximation \tilde{K}_X := K_{X,\tilde{X}} K_{\tilde{X}}^{-1} K_{\tilde{X},X}, where K_{\tilde{X}} is the covariance of f_{\tilde{X}} and K_{X,\tilde{X}} \in R^{N \times M} is the cross-covariance between f_X and f_{\tilde{X}}. Let \Lambda \in R^{N \times N} be diagonal. The inducing points \tilde{X} are treated as hyperparameters and can be found by jointly maximizing the log-likelihood with \theta:

\max_{\theta, \tilde{X}} \log N(y | 0, \tilde{K}_X + \sigma^2 I + \Lambda).   (5)

Several approaches to sparse GPR can be viewed as special cases of this problem [18]: the Deterministic Training Conditional (DTC) approximation [21] sets \Lambda to zero. To heal the degeneracy in p(f_X), the Fully Independent Training Conditional (FITC) approximation [23] includes heteroscedastic noise, setting \Lambda = diag(K_X - \tilde{K}_X); as a result, the sum \Lambda + \tilde{K}_X matches the true covariance K_X on the diagonal. This general maximum likelihood scheme for finding the inducing points is adopted, with variations, in [24, 27, 8, 2]. A major drawback of all of these approaches is that they can over-fit, due to the high degrees of freedom \tilde{X} in the prior parametrization [26].
Variational sparse GPR can alternatively be formulated so as to approximate the posterior of the latent function by a full GP parameterized by the inducing points and the statistics of the inducing functions [1, 26]. Specifically, Titsias [26] proposes to use

q(f_X, f_{\tilde{X}}) = p(f_X | f_{\tilde{X}}) q(f_{\tilde{X}})   (6)

to approximate p(f_X, f_{\tilde{X}} | y), where q(f_{\tilde{X}}) = N(f_{\tilde{X}} | \tilde{m}, \tilde{S}) is a Gaussian approximation of p(f_{\tilde{X}} | y) and p(f_X | f_{\tilde{X}}) = N(f_X | K_{X,\tilde{X}} K_{\tilde{X}}^{-1} f_{\tilde{X}}, K_X - \tilde{K}_X) is the conditional probability in the full GP. The novelty here is that q(f_X, f_{\tilde{X}}), despite its parametrization by finitely many parameters, is still a full GP, which, unlike its predecessor [21], can be infinite-dimensional.

The inference problem of variational sparse GPR is solved by minimizing the KL divergence KL[q(f_X, f_{\tilde{X}}) || p(f_X, f_{\tilde{X}} | y)]. In practice, the minimization is transformed into the maximization of a lower bound of the log-likelihood [26]:

\max_{\theta, \tilde{X}, \tilde{m}, \tilde{S}} \log p(y) \ge \max_{\theta, \tilde{X}, \tilde{m}, \tilde{S}} \int q(f_X, f_{\tilde{X}}) \log \frac{p(y | f_X) p(f_X | f_{\tilde{X}}) p(f_{\tilde{X}})}{q(f_X, f_{\tilde{X}})} df_X df_{\tilde{X}}
  = \max_{\theta, \tilde{X}, \tilde{m}, \tilde{S}} \int p(f_X | f_{\tilde{X}}) q(f_{\tilde{X}}) \log \frac{p(y | f_X) p(f_{\tilde{X}})}{q(f_{\tilde{X}})} df_X df_{\tilde{X}}
  = \max_{\theta, \tilde{X}} \log N(y | 0, \tilde{K}_X + \sigma^2 I) - \frac{1}{2\sigma^2} Tr(K_X - \tilde{K}_X).   (7)

The last equality results from exact maximization over \tilde{m} and \tilde{S}; for the treatment of non-conjugate likelihoods, see [22]. We note that q(f_{\tilde{X}}) is a function of \tilde{m} and \tilde{S}, whereas p(f_{\tilde{X}}) and p(f_X | f_{\tilde{X}}) are functions of \tilde{X}. As a result, \tilde{X}, \tilde{m}, and \tilde{S} become variational parameters that can be optimized without over-fitting. Compared with (5), the variational approach in (7) regularizes the learning with the penalty Tr(K_X - \tilde{K}_X) and therefore exhibits better generalization performance. Several subsequent works employ similar strategies: Alvarez et al. [3] adopt the same variational approach in the multi-output regression setting with scaled basis functions, and Abdel-Gawad et al. [1] use expectation propagation to solve for the approximate posterior under the same factorization.
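The collapsed objective in (7) is easy to evaluate directly; the sketch below forms the N x N matrix for clarity, although a practical implementation would use the matrix inversion lemma to stay at O(M^2 N) (names and the use of scipy are our own choices):

```python
import numpy as np
from scipy.stats import multivariate_normal

def titsias_bound(diag_K_X, K_XZ, K_Z, y, sigma):
    """Variational lower bound (7); Z plays the role of the inducing points X-tilde.

    diag_K_X : diag(K_X),  K_XZ : N x M cross-covariance,  K_Z : M x M covariance.
    """
    Q = K_XZ @ np.linalg.solve(K_Z, K_XZ.T)                 # rank-M approximation of K_X
    N = y.shape[0]
    ll = multivariate_normal.logpdf(y, mean=np.zeros(N),
                                    cov=Q + sigma**2 * np.eye(N))
    penalty = (diag_K_X - np.diag(Q)).sum() / (2.0 * sigma**2)
    return ll - penalty                                      # maximize over theta and Z
```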
3 Incremental Variational Sparse Gaussian Process Regression
Despite leveraging sparsity, the batch solution to the variational objective in (7) requires O(M^2 N) operations and access to all of the training data during each optimization step [26], which means that learning from large datasets is still infeasible. Recently, several attempts have been made to incrementally solve the variational sparse GPR problem in order to learn better models from large datasets [1, 9, 10]. The key idea is to rewrite (7) explicitly as a sum over individual observations:

\max_{\theta, \tilde{X}, \tilde{m}, \tilde{S}} \int p(f_X | f_{\tilde{X}}) q(f_{\tilde{X}}) \log \frac{p(y | f_X) p(f_{\tilde{X}})}{q(f_{\tilde{X}})} df_X df_{\tilde{X}}
  = \max_{\theta, \tilde{X}, \tilde{m}, \tilde{S}} \int q(f_{\tilde{X}}) \left( \sum_{i=1}^{N} E_{p(f_{x_i} | f_{\tilde{X}})}[\log p(y_i | f_{x_i})] + \log \frac{p(f_{\tilde{X}})}{q(f_{\tilde{X}})} \right) df_{\tilde{X}}.   (8)
The objective function in (8), with fixed \tilde{X}, is identical to the problem of stochastic variational inference [11] for conjugate exponential families. Hensman et al. [9] exploit this idea to incrementally update the statistics \tilde{m} and \tilde{S} via stochastic natural gradient ascent,^1 which, at the tth iteration, takes the direction derived from the limit of maximizing (8) subject to KL_sym(q_t(f_{\tilde{X}}) || q_{t-1}(f_{\tilde{X}})) < \epsilon as \epsilon -> 0. Natural gradient ascent considers the manifold structure of probability distributions derived from the KL divergence and is known to be Fisher efficient [4]. Although the optimal inducing points \tilde{X}, like the statistics \tilde{m} and \tilde{S}, should be updated given new observations, it is difficult to design natural gradient ascent for learning the inducing points \tilde{X} online. Because p(f_X | f_{\tilde{X}}) in (8) depends on all the observations, evaluating the divergence with respect to p(f_X | f_{\tilde{X}}) q(f_{\tilde{X}}) over iterations becomes infeasible.
We propose a novel approach to incremental variational sparse GPR, iVSGPR, that works by reformulating (7) in its RKHS dual form. This avoids the issue of the posterior approximation p(f_X | f_{\tilde{X}}) q(f_{\tilde{X}}) referring to all observations. As a result, we can perform stochastic approximation of (7) while monitoring the KL divergence between the posterior approximations due to the change of \tilde{m}, \tilde{S}, and \tilde{X} across iterations. Specifically, we use stochastic mirror ascent [17] in the space of probability densities in RKHS, which was recently proven to be as efficient as stochastic natural gradient ascent [19]. In each iteration, iVSGPR solves a subproblem of fractional Bayesian inference, which we show can be formulated as a standard variational sparse GPR problem of the size of a minibatch in O(M^2 N_m + M^3) operations, where N_m is the size of a minibatch.
3.1 Dual Representations of Gaussian Processes in RKHS
An RKHS H is a Hilbert space of functions satisfying the reproducing property: there exists k_x \in H such that for all f \in H, f(x) = <f, k_x>_H. In general, H can be infinite-dimensional and can uniformly approximate continuous functions on a compact set [16]. To simplify notation, we write k_x^T f for <f, k_x>_H, and f^T L g for <f, Lg>, where f, g \in H and L : H -> H, even if H is infinite-dimensional.

A Gaussian process GP(m, k) has a dual representation in an RKHS H [12]: there exist \mu \in H and a positive semi-definite linear operator \Sigma : H -> H such that for any x, x' \in X there are \phi_x, \phi_{x'} \in H with

m(x) = \eta \phi_x^T \mu,   k(x, x') = \eta^2 \phi_x^T \Sigma \phi_{x'}.   (9)

That is, the mean function has a realization \mu in H, which is defined by the reproducing kernel \tilde{k}(x, x') = \eta^2 \phi_x^T \phi_{x'}; the covariance function can be equivalently represented by a linear operator \Sigma. In shorthand, with abuse of notation, we write N(f | \mu, \Sigma).^2 Note that we do not assume that samples from GP(m, k) lie in H. In the following, without loss of generality, we assume the GP prior considered in regression has \mu = 0 and \Sigma = I; that is, m(x) = 0 and k(x, x') = \eta^2 \phi_x^T \phi_{x'}.
3.1.1 Subspace Parametrization of the Approximate Posterior
The full GP posterior approximation p(f_X | f_{\tilde{X}}) q(f_{\tilde{X}}) in (7) can be written equivalently in a subspace parametrization using \{\phi_{\tilde{x}_i} \in H | \tilde{x}_i \in \tilde{X}\}_{i=1}^{M}:

\tilde{\mu} = \Phi_{\tilde{X}} a,   \tilde{\Sigma} = I + \Phi_{\tilde{X}} A \Phi_{\tilde{X}}^T,   (10)

^1 Although \tilde{X} was fixed in their experiments, it can potentially be updated by stochastic gradient ascent.
^2 Because a GP can be infinite-dimensional, it cannot define a density but only a Gaussian measure. The notation N(f | \mu, \Sigma) is used to indicate that the Gaussian measure can be defined, equivalently, by \mu and \Sigma.
where a \in R^M and A \in R^{M \times M} are such that \tilde{\Sigma} \succeq 0, and \Phi_{\tilde{X}} : R^M -> H is defined as \Phi_{\tilde{X}} a = \sum_{i=1}^{M} a_i \phi_{\tilde{x}_i}. Suppose q(f_{\tilde{X}}) = N(f_{\tilde{X}} | \tilde{m}, \tilde{S}). By (10), \tilde{m} = K_{\tilde{X}} a and \tilde{S} = K_{\tilde{X}} + K_{\tilde{X}} A K_{\tilde{X}}, which implies the relationship

a = K_{\tilde{X}}^{-1} \tilde{m},   A = K_{\tilde{X}}^{-1} \tilde{S} K_{\tilde{X}}^{-1} - K_{\tilde{X}}^{-1},   (11)

where the covariances related to the inducing functions are defined as K_{\tilde{X}} = \Phi_{\tilde{X}}^T \Phi_{\tilde{X}} and K_{X,\tilde{X}} = \eta \Phi_X^T \Phi_{\tilde{X}}. The sparse structure results in f(x) ~ GP(k_{x,\tilde{X}} K_{\tilde{X}}^{-1} \tilde{m}, k_{x,x} + k_{x,\tilde{X}} (K_{\tilde{X}}^{-1} \tilde{S} K_{\tilde{X}}^{-1} - K_{\tilde{X}}^{-1}) k_{\tilde{X},x}), which is the same as \int p(f(x) | f_{\tilde{X}}) q(f_{\tilde{X}}) df_{\tilde{X}}, the posterior GP found in (7), where k_{x,x} = k(x, x) and k_{x,\tilde{X}} = \eta \phi_x^T \Phi_{\tilde{X}}. We note that the scaling factor \eta is associated with the evaluation of f(x), not with the inducing functions f_{\tilde{X}}. In addition, we distinguish the hyperparameter s (e.g., the length scale), which controls the measurement basis \phi_x, from the parameters in the inducing points \tilde{X}.

A subspace parametrization corresponds to a full GP if \tilde{\Sigma} \succeq 0. More precisely, because (10) is completely determined by the statistics \tilde{m}, \tilde{S} and the inducing points \tilde{X}, the family of subspace-parametrized GPs lies on a nonlinear submanifold in the space of all GPs (the degenerate GP in (4) is a special case if we allow the I in (10) to be ignored).
3.1.2 Sparse Gaussian Processes Regression in RKHS
We now reformulate the variational inference problem (7) in RKHS.^3 Following the previous section, the sparse GP structure of the posterior approximation q(f_X, f_{\tilde{X}}) in (6) has a corresponding dual representation in RKHS, q(f) = N(f | \tilde{\mu}, \tilde{\Sigma}). Specifically, q(f) and q(f_X, f_{\tilde{X}}) are related as follows:

q(f) \propto p(f_X | f_{\tilde{X}}) q(f_{\tilde{X}}) |K_{\tilde{X}}|^{1/2} |K_X - \tilde{K}_X|^{1/2},   (12)

in which the determinants are due to the change of measure. The equality (12) allows us to rewrite (7) in terms of q(f) simply as

\max_{q(f)} L(q(f)) = \max_{q(f)} \int q(f) \log \frac{p(y | f) p(f)}{q(f)} df,   (13)

or, equivalently, as \min_{q(f)} KL[q(f) || p(f | y)]. That is, the heuristically motivated variational problem (7) is indeed minimizing a proper KL divergence between two Gaussian measures. A similar justification of (7) is given rigorously in [14] in terms of KL divergence minimization between Gaussian processes, which can be viewed as a dual of our approach. Due to space limitations, the proofs of (12) and of the equivalence between (7) and (13) can be found in the Appendix.
The benefit of the formulation (13) is that, in its sampling form,

\max_{q(f)} \int q(f) \left( \sum_{i=1}^{N} \log p(y_i | f) + \log \frac{p(f)}{q(f)} \right) df,   (14)

the approximate posterior q(f) nicely summarizes all the variational parameters \tilde{X}, \tilde{m}, and \tilde{S} without referring to the samples, as p(f_X | f_{\tilde{X}}) q(f_{\tilde{X}}) does. Therefore, the KL divergence of q(f) across iterations can be used to regulate online learning.
3.2 Incremental Learning
Stochastic mirror ascent [17] handles (non-)Euclidean structure on the variables, induced by a Bregman divergence (or prox-function) [5], in convex optimization. We apply it to solve the variational inference problem (14), because (14) is convex in the space of probabilities [17]. Here we suppress the dependency of q(f) on f for simplicity. At the tth iteration, stochastic mirror ascent solves the subproblem

q_{t+1} = \arg\max_q \gamma_t \int \hat{\nabla} L(q_t, y_t) q(f) df - KL[q || q_t],   (15)

^3 Here we assume the set X is finite and countable. This assumption suffices in learning and allows us to restrict H to be the finite-dimensional span of \Phi_X. Rigorously, for infinite-dimensional H, the equivalence can be written in terms of the Radon-Nikodym derivative between q(f) df and the normal Gaussian measure, where q(f) df denotes a Gaussian measure that has an RKHS representation given as q(f).
where \gamma_t is the step size and \hat{\nabla} L(q_t, y_t) is the sampled subgradient of L with respect to q when the observation is (x_t, y_t). The algorithm converges in O(t^{-1/2}) if (15) is solved within numerical error \epsilon_t such that \sum_t \epsilon_t \sim O(\sum_t \gamma_t^2) [7].
The subproblem (15) is actually equivalent to sparse variational GP regression with a general Gaussian prior. By the definition of L(q) in (14), (15) can be derived as

q_{t+1} = \arg\max_q \gamma_t \int q(f) \left( N \log p(y_t | f) + \log \frac{p(f)}{q_t(f)} \right) df - KL[q || q_t]
        = \arg\max_q \int q(f) \log \frac{p(y_t | f)^{N \gamma_t} p(f)^{\gamma_t} q_t^{1 - \gamma_t}(f)}{q(f)} df.   (16)

This equation is equivalent to (13), but with the prior modified to p(f)^{\gamma_t} q_t(f)^{1 - \gamma_t} and the likelihood modified to p(y_t | f)^{N \gamma_t}. Because p(f) is an isotropic zero-mean Gaussian, p(f)^{\gamma_t} q_t(f)^{1 - \gamma_t} has a subspace parametrization expressed in the same basis functions as q_t. Suppose q_t has mean \tilde{\mu}_t and precision \tilde{\Sigma}_t^{-1}. Then p(f)^{\gamma_t} q_t(f)^{1 - \gamma_t} is equivalent to N(f | \bar{\mu}_t, \bar{\Sigma}_t) up to a constant factor, where \bar{\Sigma}_t^{-1} \bar{\mu}_t = (1 - \gamma_t) \tilde{\Sigma}_t^{-1} \tilde{\mu}_t and \bar{\Sigma}_t^{-1} = (1 - \gamma_t) \tilde{\Sigma}_t^{-1} + \gamma_t I. By (10), \tilde{\Sigma}_t^{-1} = I - \Phi_{\tilde{X}} (A_t^{-1} + K_{\tilde{X}})^{-1} \Phi_{\tilde{X}}^T for some A_t, and therefore \bar{\Sigma}_t^{-1} = I - (1 - \gamma_t) \Phi_{\tilde{X}} (A_t^{-1} + K_{\tilde{X}})^{-1} \Phi_{\tilde{X}}^T, which is expressed in the same basis. In addition, by (12), (16) can be further written in the form of (7) and therefore solved by a standard sparse variational GPR program with modified \tilde{m} and \tilde{S} (please see the Appendix for details).
Although we derived the equations for a single observation, minibatches can be used, with the same convergence rate and reduced variance, by changing the factor p(y_t | f)^{N \gamma_t} to \prod_{i=1}^{N_m} p(y_{t_i} | f)^{N \gamma_t / N_m}. The hyperparameters \theta = (s, \eta, \sigma) in the GP can be updated along with the variational parameters by stochastic gradient ascent along the gradient of \int q_t(f) \log \frac{p(y_t | f) p(f)}{q_t(f)} df.
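Schematically, the resulting online procedure is a loop over minibatches, each solved as a small tempered sparse GPR. A sketch follows, where solve_vsgpr and grad_theta are caller-supplied placeholders (our assumptions) for the subproblem solver of (16), run for a few line searches per Section 3.3, and for the hyperparameter gradient.

```python
import numpy as np

def ivsgpr(minibatches, q0, theta0, N, solve_vsgpr, grad_theta, lr_hyper=1e-4):
    """Schematic iVSGPR outer loop: stochastic mirror ascent on q, SGA on theta."""
    q, theta = q0, theta0
    for t, (X_t, y_t) in enumerate(minibatches, start=1):
        gamma_t = 1.0 / (1.0 + np.sqrt(t))         # step size used in the experiments
        # subproblem (16): prior tempered to p^gamma_t * q_t^(1 - gamma_t),
        # likelihood raised to the power N * gamma_t / N_m
        q = solve_vsgpr(X_t, y_t, q, gamma_t, N)
        theta = theta + lr_hyper * gamma_t * grad_theta(q, theta, X_t, y_t)
    return q, theta
```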
3.3 Related Work
The subproblem (16) is equivalent to first performing stochastic natural gradient ascent [11] on q(f) in (14) and then projecting the distribution back onto the low-dimensional manifold specified by the subspace parametrization. At the tth iteration, define q'_t(f) := p(y_t | f)^{N \gamma_t} p(f)^{\gamma_t} q_t(f)^{1 - \gamma_t}. Because a GP can be viewed as a Gaussian measure in an infinite-dimensional RKHS, q'_t(f) in (16) can be viewed as the result of taking a stochastic natural gradient ascent step of size \gamma_t from q_t(f). Then (16) becomes \min_q KL[q || q'_t], which projects q'_t back onto the subspace parametrization specified by the M basis functions. Therefore, (16) can also be viewed as performing stochastic natural gradient ascent with a KL divergence projection. From this perspective, we can see that if \tilde{X}, which controls the inducing functions, is fixed in the subproblem (16), then iVSGPR degenerates to the algorithm of Hensman et al. [9].

Recently, several works have considered the manifold structure induced by the KL divergence in Bayesian inference [7, 25, 13]. Theis and Hoffman [25] use trust regions to mitigate the sensitivity of stochastic variational inference to the choices of hyperparameters and initialization. Let \lambda_t be the size of the trust region. At the tth iteration, they solve the objective \max_q L(q) - \lambda_t KL[q || q_t], which is the same as subproblem (16) if applied to (14). The difference is that in (16) \gamma_t is a decaying step-size sequence in stochastic mirror ascent, whereas \lambda_t is manually selected. A similar formulation also appears in [13], where the part of L(q) that is non-convex in the variational parameters is linearized. Dai et al. [7] use particles or a kernel density estimator to approximate the posterior of \tilde{X} in a setting with a low-rank GP prior. By contrast, we follow Titsias's variational approach [26] and adopt a full GP as the approximate posterior; we therefore avoid the difficulties of estimating the posterior of \tilde{X} and focus on the approximate posterior q(f) related to the function values.
The stochastic mirror ascent framework sheds light on the convergence conditions of the algorithm. As pointed out in Dai et al. [7], the subproblem (15) can be solved up to accuracy \epsilon_t as long as \epsilon_t is of order O(\gamma_t^2), where \gamma_t \sim O(1/\sqrt{t}) [17]. Also, Khan et al. [13] solve a linearized approximation of (15) in each step and report satisfactory empirical results. Although variational sparse GPR (16) is a nonconvex optimization in \tilde{X} and is often solved by nonlinear conjugate gradient ascent, empirically the objective function increases most significantly in the first few iterations. Therefore, based on the results in [7], we argue that in online learning (16) can be solved approximately by performing a small fixed number of line searches.
4 Experiments
We compare our method iVSGPR with VSGPR_svi, the state-of-the-art variational sparse GPR method based on stochastic variational inference [9], in which i.i.d. data are sampled from the training dataset to update the models. We consider a zero-mean GP prior generated by a squared-exponential kernel with automatic relevance determination (SE-ARD) [20], k(x, x') = \eta^2 \prod_{d=1}^{D} \exp(-(x_d - x'_d)^2 / (2 s_d^2)), where s_d > 0 is the length scale of dimension d and D is the dimensionality of the input. For the inducing functions, we modified the multi-scale kernel in [27] to

\phi_x^T \phi_{x'} = \prod_{d=1}^{D} \left( \frac{2 l_{x,d} l_{x',d}}{l_{x,d}^2 + l_{x',d}^2} \right)^{1/2} \exp\left( -\sum_{d=1}^{D} \frac{(x_d - x'_d)^2}{l_{x,d}^2 + l_{x',d}^2} \right),   (17)

where l_{x,d} is the length-scale parameter. The definition (17) includes the SE-ARD kernel as a special case, which can be recovered by setting (l_{x,d})_{d=1}^{D} = (s_d)_{d=1}^{D}, and hence their cross-covariance can be computed.
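For completeness, the SE-ARD kernel of the experiments in NumPy (a minimal sketch; eta denotes the scale factor):

```python
import numpy as np

def se_ard(X1, X2, s, eta):
    """SE-ARD kernel: eta^2 * prod_d exp(-(x_d - x'_d)^2 / (2 s_d^2)).

    X1 : n1 x D inputs, X2 : n2 x D inputs, s : D length scales, eta : scale.
    """
    Z1, Z2 = X1 / s, X2 / s                     # scale each dimension by its length scale
    d2 = ((Z1[:, None, :] - Z2[None, :, :]) ** 2).sum(-1)
    return eta**2 * np.exp(-0.5 * d2)
```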
In the following experiments, we set the number of inducing functions to 512. All models were initialized with the same hyperparameters and inducing points: the hyperparameters were selected as the optimal ones from batch variational sparse GPR [26] trained on a subset of the training data of size 2048, and the inducing points were initialized as random samples from the first minibatch. We chose the learning rate \gamma_t = (1 + \sqrt{t})^{-1} for the stochastic mirror ascent that updates the posterior approximation; the learning rate for the stochastic gradient ascent that updates the hyperparameters is set to 10^{-4} \gamma_t. We evaluate the models in terms of the normalized mean squared error (nMSE) on a held-out test set after 500 iterations.
We performed experiments on three real-world robotic datasets, kin40k^4, SARCOS^5, and KUKA^6, and three variations of iVSGPR: iVSGPR_5, iVSGPR_10, and iVSGPR_ada.^7 For the kin40k and SARCOS datasets, we also implemented VSGPR*_svi, which uses stochastic variational inference to update \tilde{m} and \tilde{S} but fixes the hyperparameters and inducing points to the solution of batch variational sparse GPR [26] with all of the training data. Because VSGPR*_svi reflects the perfect scenario of performing stochastic approximation under the selected learning rate, we consider it the optimal goal we want to approach.
The experimental results of kin40k and SARCOS are summarized in Table 1a. In general, the adaptive
scheme iVSGPRada performs the best, but we observe that even performing a small fixed number of
iterations (iVSGPR5, iVSGPR10) results in performance that is close to, if not better than, VSGPR*svi.
Possible explanations are that the change of objective function in gradient-based algorithms is
dominant in the first few iterations and that the found inducing points and hyper-parameters have
finite numerical resolution in batch optimization. For example, Figure 1a shows the change of test
error over iterations in learning joint 2 of SARCOS dataset. For all methods, the convergence rate
improves with a larger minibatch. In addition, from Figure 1b, we observe that the required number
of steps iVSGPRada needs to solve (16) decays with the number of iterations; only a small number of
line searches is required after the first few iterations.
Table 1b and Table 1c show the experimental results on two larger datasets. In the experiments, we
mixed the offline and online partitions in the original KUKA dataset and then split 90% into training
and 10% into testing datasets in order to create an online i.i.d. streaming scenario. We did not
compare to VSGPR*svi on these datasets, since computing the inducing points and hyperparameters
in batch is infeasible. As above, iVSGPRada stands out from other models, closely followed by
iVSGPR10 . We found that the difference between VSGPRsvi and iVSGPRs is much greater on these
larger real-world benchmarks.
Auxiliary experimental results illustrating convergence for all experiments summarized in Tables 1a, 1b, and 1c are included in the Appendix.
⁴ kin40k: 10000 training data, 30000 testing data, 8 attributes [23]
⁵ SARCOS: 44484 training data, 4449 testing data, 28 attributes. http://www.gaussianprocess.org/gpml/data/
⁶ KUKA1&KUKA2: 17560 offline data, 180360 online data, 28 attributes. [15]
⁷ The number in the subscript denotes the number of function calls allowed in nonlinear conjugate gradient
descent [20] to solve subproblem (16); ada denotes that (16) is solved until the relative function change is less
than $10^{-5}$.
(a) kin40k and SARCOS

              VSGPRsvi   iVSGPR5   iVSGPR10   iVSGPRada   VSGPR*svi
  kin40k       0.0959     0.0648    0.0608     0.0607      0.0535
  SARCOS J1    0.0247     0.0228    0.0214     0.0210      0.0208
  SARCOS J2    0.0193     0.0176    0.0159     0.0156      0.0156
  SARCOS J3    0.0125     0.0112    0.0104     0.0103      0.0104
  SARCOS J4    0.0048     0.0044    0.0040     0.0038      0.0039
  SARCOS J5    0.0267     0.0243    0.0229     0.0226      0.0230
  SARCOS J6    0.0300     0.0259    0.0235     0.0229      0.0230
  SARCOS J7    0.0101     0.0090    0.0082     0.0081      0.0101

(b) KUKA1

         VSGPRsvi   iVSGPR5   iVSGPR10   iVSGPRada
  J1      0.1699     0.1455    0.1257     0.1176
  J2      0.1530     0.1305    0.1221     0.1138
  J3      0.1873     0.1554    0.1403     0.1252
  J4      0.1376     0.1216    0.1151     0.1108
  J5      0.1955     0.1668    0.1487     0.1398
  J6      0.1766     0.1645    0.1573     0.1506
  J7      0.1374     0.1357    0.1342     0.1333

(c) KUKA2

         VSGPRsvi   iVSGPR5   iVSGPR10   iVSGPRada
  J1      0.1737     0.1452    0.1284     0.1214
  J2      0.1517     0.1312    0.1183     0.1081
  J3      0.2108     0.1818    0.1652     0.1544
  J4      0.1357     0.1171    0.1104     0.1046
  J5      0.2082     0.1846    0.1697     0.1598
  J6      0.1925     0.1890    0.1855     0.1809
  J7      0.1329     0.1309    0.1287     0.1275

Table 1: Testing error (nMSE) after 500 iterations. Nm = 2048; Ji denotes the ith joint.
[Figure 1: Online learning results of SARCOS joint 2. (a) Test error: nMSE evaluated on the held-out
test set; the dashed lines and the solid lines denote the results with Nm = 512 and Nm = 2048,
respectively. (b) Number of function calls used by iVSGPRada in solving (16); a maximum of 100
calls is imposed.]
5 Conclusion
We propose a stochastic approximation of variational sparse GPR [26], iVSGPR. By reformulating
the variational inference in an RKHS, the updates of the statistics of the inducing functions and of the
inducing points can be unified as stochastic mirror ascent on probability densities, which respects the
manifold structure. In our experiments, iVSGPR shows better performance than the direct adoption of
stochastic variational inference to solve variational sparse GPs. As iVSGPR executes a fixed number
of operations for each minibatch, it is suitable for applications where training data is abundant, e.g.
sensory data in robotics. In future work, we are interested in applying iVSGPR to extensions of sparse
Gaussian processes such as GP-LVMs and dynamical system modeling.
References
[1] Ahmed H Abdel-Gawad, Thomas P Minka, et al. Sparse-posterior Gaussian processes for general likelihoods. arXiv preprint arXiv:1203.3507, 2012.
[2] Mauricio Alvarez and Neil D Lawrence. Sparse convolved Gaussian processes for multi-output regression. In Advances in Neural Information Processing Systems, pages 57–64, 2009.
[3] Mauricio A Alvarez, David Luengo, Michalis K Titsias, and Neil D Lawrence. Efficient multioutput Gaussian processes through variational inducing kernels. In International Conference on Artificial Intelligence and Statistics, pages 25–32, 2010.
[4] Shun-Ichi Amari. Natural gradient works efficiently in learning. Neural Computation, 10(2):251–276, 1998.
[5] Arindam Banerjee, Srujana Merugu, Inderjit S Dhillon, and Joydeep Ghosh. Clustering with Bregman divergences. The Journal of Machine Learning Research, 6:1705–1749, 2005.
[6] Lehel Csató and Manfred Opper. Sparse on-line Gaussian processes. Neural Computation, 14(3):641–668, 2002.
[7] Bo Dai, Niao He, Hanjun Dai, and Le Song. Scalable Bayesian inference via particle mirror descent. arXiv preprint arXiv:1506.03101, 2015.
[8] Anibal Figueiras-Vidal and Miguel Lázaro-Gredilla. Inter-domain Gaussian processes for sparse inference using inducing features. In Advances in Neural Information Processing Systems, pages 1087–1095, 2009.
[9] James Hensman, Nicolo Fusi, and Neil D Lawrence. Gaussian processes for big data. arXiv preprint arXiv:1309.6835, 2013.
[10] Trong Nghia Hoang, Quang Minh Hoang, and Kian Hsiang Low. A unifying framework of anytime sparse Gaussian process regression models with stochastic variational inference for big data. In Proc. ICML, pages 569–578, 2015.
[11] Matthew D Hoffman, David M Blei, Chong Wang, and John Paisley. Stochastic variational inference. The Journal of Machine Learning Research, 14(1):1303–1347, 2013.
[12] Irina Holmes and Ambar N Sengupta. The Gaussian Radon transform and machine learning. Infinite Dimensional Analysis, Quantum Probability and Related Topics, 18(03):1550019, 2015.
[13] Mohammad E Khan, Pierre Baqué, François Fleuret, and Pascal Fua. Kullback-Leibler proximal variational inference. In Advances in Neural Information Processing Systems, pages 3384–3392, 2015.
[14] Alexander G de G Matthews, James Hensman, Richard E Turner, and Zoubin Ghahramani. On sparse variational methods and the Kullback-Leibler divergence between stochastic processes. In Proceedings of the Nineteenth International Conference on Artificial Intelligence and Statistics, 2016.
[15] Franziska Meier, Philipp Hennig, and Stefan Schaal. Incremental local Gaussian regression. In Advances in Neural Information Processing Systems, pages 972–980, 2014.
[16] Charles A Micchelli, Yuesheng Xu, and Haizhang Zhang. Universal kernels. The Journal of Machine Learning Research, 7:2651–2667, 2006.
[17] Arkadi Nemirovski, Anatoli Juditsky, Guanghui Lan, and Alexander Shapiro. Robust stochastic approximation approach to stochastic programming. SIAM Journal on Optimization, 19(4):1574–1609, 2009.
[18] Joaquin Quiñonero-Candela and Carl Edward Rasmussen. A unifying view of sparse approximate Gaussian process regression. The Journal of Machine Learning Research, 6:1939–1959, 2005.
[19] Garvesh Raskutti and Sayan Mukherjee. The information geometry of mirror descent. Information Theory, IEEE Transactions on, 61(3):1451–1457, 2015.
[20] Carl Edward Rasmussen and Christopher K. I. Williams. Gaussian Processes for Machine Learning. 2006.
[21] Matthias Seeger, Christopher Williams, and Neil Lawrence. Fast forward selection to speed up sparse Gaussian process regression. In Artificial Intelligence and Statistics 9, number EPFL-CONF-161318, 2003.
[22] Rishit Sheth, Yuyang Wang, and Roni Khardon. Sparse variational inference for generalized GP models. In Proceedings of the 32nd International Conference on Machine Learning (ICML-15), pages 1302–1311, 2015.
[23] Edward Snelson and Zoubin Ghahramani. Sparse Gaussian processes using pseudo-inputs. In Advances in Neural Information Processing Systems, pages 1257–1264, 2005.
[24] Edward Snelson and Zoubin Ghahramani. Local and global sparse Gaussian process approximations. In International Conference on Artificial Intelligence and Statistics, pages 524–531, 2007.
[25] Lucas Theis and Matthew D Hoffman. A trust-region method for stochastic variational inference with applications to streaming data. arXiv preprint arXiv:1505.07649, 2015.
[26] Michalis K Titsias. Variational learning of inducing variables in sparse Gaussian processes. In International Conference on Artificial Intelligence and Statistics, pages 567–574, 2009.
[27] Christian Walder, Kwang In Kim, and Bernhard Schölkopf. Sparse multiscale Gaussian process regression. In Proceedings of the 25th International Conference on Machine Learning, pages 1112–1119. ACM, 2008.
6,051 | 6,474 | Combining Adversarial Guarantees and
Stochastic Fast Rates in Online Learning
Wouter M. Koolen
Centrum Wiskunde & Informatica
Science Park 123, 1098 XG
Amsterdam, the Netherlands
wmkoolen@cwi.nl
Peter Grünwald
CWI and Leiden University
pdg@cwi.nl
Tim van Erven
Leiden University
Niels Bohrweg 1, 2333 CA
Leiden, the Netherlands
tim@timvanerven.nl
Abstract
We consider online learning algorithms that guarantee worst-case regret rates
in adversarial environments (so they can be deployed safely and will perform
robustly), yet adapt optimally to favorable stochastic environments (so they will
perform well in a variety of settings of practical importance). We quantify the
friendliness of stochastic environments by means of the well-known Bernstein
(a.k.a. generalized Tsybakov margin) condition. For two recent algorithms (Squint
for the Hedge setting and MetaGrad for online convex optimization) we show that
the particular form of their data-dependent individual-sequence regret guarantees
implies that they adapt automatically to the Bernstein parameters of the stochastic
environment. We prove that these algorithms attain fast rates in their respective
settings both in expectation and with high probability.
1 Introduction
We consider online sequential decision problems. We focus on full information settings, encompassing
such interaction protocols as online prediction, classification and regression, prediction with expert
advice or the Hedge setting, and online convex optimization (see Cesa-Bianchi and Lugosi 2006). The
goal of the learner is to choose a sequence of actions with small regret, i.e. such that his cumulative
loss is not much larger than the loss of the best fixed action in hindsight. This has to hold even in
the worst case, where the environment is controlled by an adversary. After three decades of research
there exist many algorithms and analysis techniques
for a variety of such settings. For many settings, adversarial regret lower bounds of order $\sqrt{T}$ are
known, along with matching individual sequence
algorithms [Shalev-Shwartz, 2011].
A more recent line of development is to design adaptive algorithms with regret guarantees that scale
with some more refined measure of the complexity of the problem. For the Hedge setting, results of
this type have been obtained, amongst others, by Cesa-Bianchi et al. [2007], De Rooij et al. [2014],
Gaillard et al. [2014], Sani et al. [2014], Even-Dar et al. [2008], Koolen et al. [2014], Koolen and
Van Erven [2015], Luo and Schapire [2015], Wintenberger [2015]. Interestingly, the price for such
adaptivity (i.e. the worsening of the worst-case regret bound) is typically extremely small (i.e. a
constant factor in the regret bound). For online convex optimization (OCO), many different types of
adaptivity have been explored, including by Crammer et al. [2009], Duchi et al. [2011], McMahan
and Streeter [2010], Hazan and Kale [2010], Chiang et al. [2012], Steinhardt and Liang [2014],
Orabona et al. [2015], Van Erven and Koolen [2016].
Here we are interested in the question of whether such adaptive results are strong enough to lead to
improved rates in the stochastic case when the data follow a "friendly" distribution. In specific cases
it has been shown that fancy guarantees do imply significantly reduced regret. For example Gaillard
et al. [2014] present a generic argument showing that a certain kind of second-order regret guarantees
implies constant expected regret (the fastest possible rate) for i.i.d. losses drawn from a distribution
with a gap (between expected loss of the best and all other actions). In this paper we significantly
extend this result. We show that a variety of individual-sequence second-order regret guarantees
imply fast regret rates for distributions under much milder stochastic assumptions. In particular, we
will look at the Bernstein condition (see Bartlett and Mendelson 2006), which is the key to fast rates
in the batch setting. This condition provides a parametrised interpolation (expressed in terms of the
Bernstein exponent $\beta \in [0, 1]$) between the friendly gap case ($\beta = 1$) and the stochastic worst case
($\beta = 0$). We show that appropriate second-order guarantees automatically lead to adaptation to these
parameters, for both the Hedge setting and for OCO. In the Hedge setting, we build on the guarantees
available for the Squint algorithm [Koolen and Van Erven, 2015] and for OCO we rely on guarantees
achieved by MetaGrad [Van Erven and Koolen, 2016]. In both cases we obtain regret rates of order
$T^{\frac{1-\beta}{2-\beta}}$ (Theorem 2). These rates include the slow worst-case $\sqrt{T}$ regime for $\beta = 0$ and the fastest
(doubly) logarithmic regime for $\beta = 1$. We show all this, not just in expectation (which is relatively
easy), but also with high probability (which is much harder). Our proofs make use of a convenient
novel notation (ESI, for exponential stochastic inequality) which allows us to prove such results
simultaneously, and which is of independent interest (Definition 5). Our proofs use that, for bounded
losses, the Bernstein condition is equivalent to the ESI-Bernstein condition, which we introduce.
The next section introduces the two settings we consider and the individual sequence guarantees we
will use in each. It also reviews the stochastic criteria for fast rates and presents our main result.
In Section 3 we consider a variety of examples illustrating the breadth of cases that we cover. In
Section 4 we introduce ESI and give a high-level overview of our proof.
2 Setup and Main Result

2.1 Hedge Setting
We start with arguably the simplest setting of online prediction, the Hedge setting popularized by
Freund and Schapire [1997]. To be able to illustrate the full reach of our stochastic assumption
we will use a minor extension to countably infinitely many actions $k \in \mathbb{N} = \{1, 2, \ldots\}$, customarily
called experts. The protocol is as follows. Each round $t$ the learner plays a probability mass function
$w_t = (w_t^1, w_t^2, \ldots)$ on experts. Then the environment reveals the losses $\ell_t = (\ell_t^1, \ell_t^2, \ldots)$ of the
experts, where each $\ell_t^k \in [0, 1]$. The learner incurs loss $\langle w_t, \ell_t\rangle = \sum_k w_t^k \ell_t^k$. The regret after $T$ rounds
compared to expert $k$ is given by
\[
R_T^k := \sum_{t=1}^{T} \big(\langle w_t, \ell_t\rangle - \ell_t^k\big).
\]
The goal of the learner is to keep the regret small compared to any expert k. We will make use
of Squint by Koolen and Van Erven [2015], a self-tuning algorithm for playing wt . Koolen and
Van Erven [2015, Theorem 4] show that Squint with prior probability mass function $\pi = (\pi^1, \pi^2, \ldots)$
guarantees
\[
R_T^k \leq \sqrt{V_T^k K_T^k} + K_T^k \quad\text{where } K_T^k = O(-\ln \pi^k + \ln\ln T) \qquad\text{for any expert } k. \tag{1}
\]
Here $V_T^k := \sum_{t=1}^{T} (\langle w_t, \ell_t\rangle - \ell_t^k)^2$ is a second-order term that depends on the algorithm's own
predictions $w_t$. It is well-known that with $K$ experts the worst-case lower bound is $\Omega(\sqrt{T \ln K})$
[Cesa-Bianchi and Lugosi, 2006, Theorem 3.7]. Taking a fat-tailed prior $\pi$, for example $\pi^k = \frac{1}{k(k+1)}$,
and using $V_T^k \leq T$, the above bound implies $R_T^k \leq O\big(\sqrt{T (\ln k + \ln\ln T)}\big)$, matching the lower
bound in some sense for all $k$ simultaneously.
The question we study in this paper is what becomes of the regret when the sequence of losses
$\ell_1, \ell_2, \ldots$ is drawn from some distribution P, not necessarily i.i.d. But before we expand on such
stochastic cases, let us first introduce another setting.
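As a concrete reading of the protocol above, the snippet below computes the regret $R_T^k$ from logged plays and losses. It is pure bookkeeping, not the Squint algorithm itself:

```python
import numpy as np

def hedge_regret(weights, losses, k):
    """R_T^k = sum_t (<w_t, l_t> - l_t^k) for the Hedge protocol.
    weights, losses: arrays of shape (T, K); k: expert index."""
    mix_loss = np.sum(weights * losses, axis=1)   # learner's loss <w_t, l_t>
    return float(np.sum(mix_loss - losses[:, k]))
```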
2.2 Online Convex Optimization (OCO)
We now turn to our second setting called online convex optimization [Shalev-Shwartz, 2011]. Here
the set of actions is a compact convex set U ? Rd . Each round t the learner plays a point wt ? U.
2
Then the environment reveals a convex loss function $\ell_t : U \to \mathbb{R}$. The loss of the learner is $\ell_t(w_t)$.
The regret after $T$ rounds compared to $u \in U$ is given by
\[
R_T^u := \sum_{t=1}^{T} \big(\ell_t(w_t) - \ell_t(u)\big).
\]
The goal is small regret compared to any point $u \in U$. A common tool in the analysis of algorithms is
the linear upper bound on the regret obtained from convexity of $\ell_t$ (at non-differentiable points we
may take any sub-gradient):
\[
R_T^u \leq \tilde{R}_T^u := \sum_{t=1}^{T} \langle w_t - u, \nabla \ell_t(w_t)\rangle.
\]
We will make use of (the full matrix version of) MetaGrad by Van Erven and Koolen [2016]. In their
Theorem 8, they show that, simultaneously, $\tilde{R}_T^u \leq O(DG\sqrt{T})$ and
\[
\tilde{R}_T^u \leq \sqrt{V_T^u K_T} + DG\, K_T \quad\text{where } K_T = O(d \ln T) \qquad\text{for any } u \in U, \tag{2}
\]
where $D$ bounds the two-norm diameter of $U$, $G$ bounds $\|\nabla \ell_t(w_t)\|_2$ the two-norm of the gradients,
and $V_T^u := \sum_{t=1}^{T} \langle w_t - u, \nabla \ell_t(w_t)\rangle^2$. The first bound matches the worst-case lower bound. The second
bound (2) may be a factor $\sqrt{K_T}$ worse, as $V_T^u \leq G^2 D^2 T$ by Cauchy-Schwarz. Yet in this paper
we will show fast rates in certain stochastic settings arising from (2). To simplify notation we will
assume from now on that $DG = 1$ (this can always be achieved by scaling the loss).
To talk about stochastic settings we will assume that the sequence $\ell_t$ of loss functions (and hence the
gradients $\nabla \ell_t(w_t)$) are drawn from a distribution P, not necessarily i.i.d. This includes the common
case of linear regression and classification where $\ell_t(u) = \mathrm{loss}(\langle u, x_t\rangle, y_t)$ with $(x_t, y_t)$ sampled i.i.d.
and loss a fixed one-dimensional convex loss function (e.g. square loss, absolute loss, log loss, hinge
loss, ...).
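Analogously, the linearized regret $\tilde{R}_T^u$ can be computed from logged iterates and (sub)gradients; a minimal sketch with names of our choosing:

```python
import numpy as np

def linearized_regret(ws, grads, u):
    """Upper bound R~_T^u = sum_t <w_t - u, g_t> on the OCO regret,
    where g_t is a (sub)gradient of l_t at w_t.
    ws, grads: arrays of shape (T, d); u: comparator of shape (d,)."""
    return float(np.sum((ws - u) * grads))
```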
2.3 Parametrised Family of Stochastic Assumptions
We now recall the Bernstein [Bartlett and Mendelson, 2006] stochastic condition. The idea behind
this assumption is to control the variance of the excess loss of the actions in the neighborhood of the
best action.
We do not require that the losses are i.i.d., nor that the Bayes act is in the model. For the Hedge
setting it suffices if there is a fixed expert $k^*$ that is always best, i.e. $E[\ell_t^{k^*} \mid G_{t-1}] = \inf_k E[\ell_t^k \mid G_{t-1}]$
almost surely for all $t$. (Here we denote by $G_{t-1}$ the sigma algebra generated by $\ell_1, \ldots, \ell_{t-1}$, and the
almost surely quantification refers to the distribution of $\ell_1, \ldots, \ell_{t-1}$.) Similarly, for OCO we assume
there is a fixed point $u^* \in U$ attaining $\min_{u \in U} E[\ell_t(u) \mid G_{t-1}]$ at every round $t$. In either case there
may be multiple candidate $k^*$ or $u^*$. In the succeeding we assume that one is selected. Note that
for i.i.d. losses the existence of a minimiser is not such a strong assumption (if the loss functions
$\ell_t$ are continuous, it is even automatic in the OCO case due to compactness of $U$), while it is very
strong beyond i.i.d. Yet it is not impossible (and actually interesting) as we will show by example in
Section 3.
Based on the loss minimiser, we define the excess losses, a family of random variables indexed by
time $t \in \mathbb{N}$ and expert/point $k \in \mathbb{N}$ / $u \in U$ as follows:
\[
x_t^k := \ell_t^k - \ell_t^{k^*} \quad\text{(Hedge)} \qquad\text{and}\qquad x_t^u := \langle u - u^*, \nabla \ell_t(u)\rangle \quad\text{(OCO)}. \tag{3}
\]
Note that for the Hedge setting we work with the loss directly. For OCO instead we talk about the
linear upper bound on the excess loss, for this is the quantity that needs to be controlled to make use
of the MetaGrad bound (2). With these variables in place, from this point on the story is the same for
Hedge and for OCO. So let us write $F$ for either the set $\mathbb{N}$ of experts or the set $U$ of points, and $f^*$
for $k^*$ resp. $u^*$, and let us consider the family $\{x_t^f : f \in F, t \in \mathbb{N}\}$. We call $f \in F$ predictors. With
this notation the Bernstein condition is the following.

Condition 1. Fix $B \geq 0$ and $\beta \in [0, 1]$. The family (3) satisfies the $(B, \beta)$-Bernstein condition if
\[
E[(x_t^f)^2 \mid G_{t-1}] \leq B\, E[x_t^f \mid G_{t-1}]^\beta
\]
almost surely for all $f \in F$ and rounds $t \in \mathbb{N}$.
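For intuition, the condition can be checked empirically for a single predictor. The sketch below uses i.i.d. draws of the excess loss, in which case the conditioning on $G_{t-1}$ is vacuous; the tolerance guards against Monte Carlo noise (all names are ours):

```python
import numpy as np

def bernstein_holds(excess, B, beta, tol=1e-3):
    """Monte Carlo check of E[x^2] <= B * E[x]^beta for one predictor,
    given i.i.d. samples `excess` of its excess loss x."""
    m1 = max(float(np.mean(excess)), 0.0)   # mean excess loss is nonnegative
    m2 = float(np.mean(excess ** 2))
    return m2 <= B * m1 ** beta + tol
```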
The point of this stochastic condition is that it implies that the variance in the excess loss gets smaller
the closer a predictor gets to the optimum in terms of expected excess loss.
Some authors refer to the $\beta = 1$ case as the Massart condition. Van Erven et al. [2015] have shown
that the Bernstein condition is equivalent to the central condition, a fast-rate type of condition that has
been frequently used (without an explicit name) in density estimation under misspecification. Two
more equivalent conditions appear in our proof sketch Section 4. We compare all four formulations
in Appendix B.
2.4 Main Result
In the stochastic case we evaluate the performance of algorithms by $R_T^{f^*}$, i.e. the regret compared
to the predictor $f^*$ with minimal expected loss. The expectation $E[R_T^{f^*}]$ is sometimes called the
pseudo-regret. The following result shows that second-order methods automatically adapt to the
Bernstein condition. (Proof sketch in Section 4.)

Theorem 2. In any stochastic setting satisfying the $(B, \beta)$-Bernstein Condition 1, the guarantees (1)
for Squint and (2) for MetaGrad imply fast rates for the respective algorithms both in expectation
and with high probability. That is,
\[
E[R_T^{f^*}] = O\big(K_T^{\frac{1}{2-\beta}}\, T^{\frac{1-\beta}{2-\beta}}\big),
\]
and for any $\delta > 0$, with probability at least $1 - \delta$,
\[
R_T^{f^*} = O\big((K_T - \ln \delta)^{\frac{1}{2-\beta}}\, T^{\frac{1-\beta}{2-\beta}}\big),
\]
where for Squint $K_T := K_T^{f^*}$ from (1) and for MetaGrad $K_T$ is as in (2).
We see that Squint and MetaGrad adapt automatically to the Bernstein parameters of the distribution,
without any tuning. Theorem 2 only uses the form of the second-order bounds and does not depend
on the details of the algorithms, so it also applies to any other method with a second-order regret
bound. In particular it holds for Adapt-ML-Prod by Gaillard et al. [2014], which guarantees (1) with
$K_T = O(\ln|F| + \ln\ln T)$ for finite sets of experts. Here we focus on Squint as it also applies to infinite
sets. Appendix D provides an extension of Theorem 2 that allows using Squint with uncountable F.
Crucially, the bound provided by Theorem 2 is natural, and, in general, the best one can expect.
This can be seen from considering the statistical learning setting, which is a special case of our
setup. Here $(x_t, y_t)$ are i.i.d. $\sim$ P and $F$ is a set of functions from $X$ to a set of predictions $A$, with
$\ell_t^f := \ell(y_t, f(x_t))$ for some loss function $\ell : Y \times A \to [0, 1]$ such as squared, 0/1, or absolute loss.
In this setting one usually considers excess risk, which is the expected loss difference between the
learned $\hat{f}$ and the optimal $f^*$. The minimax expected (over training sample $(x^t, y^t)$) risk relative
to $f^*$ is of order $T^{-1/2}$ (see e.g. Massart and Nédélec [2006], Audibert [2009]). To get better risk
rates, one has to impose further assumptions on P. A standard assumption made in such cases is a
Bernstein condition with exponent $\beta > 0$; see e.g. Koltchinskii [2006], Bartlett and Mendelson [2006],
Audibert [2004] or Audibert [2009]; see Van Erven et al. [2015] for how it generalizes the Tsybakov
margin and other conditions.

If $F$ is sufficiently 'simple', e.g. a class with logarithmic entropy numbers (see Appendix D), or, in
classification, a VC class, then, if a $\beta$-Bernstein condition holds, ERM (empirical risk minimization)
achieves, in expectation, a better excess risk bound of order $O\big((\log T)\, T^{-\frac{1}{2-\beta}}\big)$. The bound
interpolates between $T^{-1/2}$ for $\beta = 0$ and $T^{-1}$ for $\beta = 1$ (Massart condition). Results of Tsybakov
[2004], Massart and Nédélec [2006], Audibert [2009] suggest that this rate can, in general, not be
improved upon, and exactly this rate is achieved by ERM and various other algorithms in various
settings by e.g. Tsybakov [2004], Audibert [2004, 2009], Bartlett et al. [2006]. By summing from
$t = 1$ to $T$ and using ERM at each $t$ to classify the next data point (so that ERM becomes FTL,
follow-the-leader), this suggests that we can achieve a cumulative expected regret $E[R_T^{f^*}]$ of order
$O\big((\log T)\, T^{\frac{1-\beta}{2-\beta}}\big)$. Theorem 2 shows that this is, indeed, also the rate that Squint attains in such
cases if $F$ is countable and the optimal $f^*$ has positive prior mass $\pi^{f^*} > 0$ (more on this condition
below); we thus see that Squint obtains exactly the rates one would expect from a statistical
learning/classification perspective, and the minimax excess risk results in that setting suggest that
these cumulative regret rates cannot be improved in general. It was shown earlier by Audibert
[2004] that, when equipped with an oracle to tune the learning rate $\eta$ as a function of $t$, the rates
$O\big((\log T)\, T^{\frac{1-\beta}{2-\beta}}\big)$ can also be achieved by Hedge, but the exact tuning depends on the unknown
$\beta$. Grünwald [2012] provides a means to tune $\eta$ automatically in terms of the data, but his method
(like ERM and all algorithms in the references above) may achieve linear regret in worst-case
settings, whereas Squint keeps the $O(\sqrt{T})$ guarantee for such cases.

Theorem 2 only gives the desired rate for Squint with infinite $F$ if $F$ is countable and $\pi^{f^*} > 0$. The
combination of these two assumptions is strong or at least unnatural, and OCO cannot be readily used
in all such cases either, so in Appendix D we therefore show how to extend Theorem 2 to the case
of uncountably infinite $F$, which can have $\pi^{f^*} = 0$, as long as $F$ admits sufficiently small entropy
numbers. Incidentally, this also allows us to show that Squint achieves regret rate $O\big((\log T)\, T^{\frac{1-\beta}{2-\beta}}\big)$
when $F = \bigcup_{i=1,2,\ldots} F_i$ is a countably infinite union of $F_i$ with appropriate entropy numbers; in such
cases there can be, at every sample size, a classifier $\hat{f} \in F$ with 0 empirical error, so that ERM/FTL
will always over-fit and cannot be used even if the Bernstein condition holds; Squint allows for
aggregation of such models. In the remainder of the main text, we concentrate on applications for
which Theorem 2 can be used directly, without extensions.
3 Examples
We give examples motivating and illustrating the Bernstein condition for the Hedge and OCO settings.
Our examples in the Hedge setting will illustrate Bernstein with $\beta < 1$ and non-i.i.d. distributions.
Our OCO examples were chosen to be natural and illustrate fast rates without curvature.
3.1 Hedge Setting: Gap implies Bernstein with $\beta = 1$
In the Hedge setting, we say that a distribution P (not necessarily i.i.d.) of expert losses $\{\ell_t^k : t, k \in \mathbb{N}\}$
has gap $\alpha > 0$ if there is an expert $k^*$ such that
\[
E[\ell_t^{k^*} \mid G_{t-1}] + \alpha \leq \inf_{k \neq k^*} E[\ell_t^k \mid G_{t-1}] \qquad\text{almost surely for each round } t \in \mathbb{N}.
\]
It is clear that the condition can only hold for $k^*$ the minimiser of the expected loss.

Lemma 3. A distribution with gap $\alpha$ is $(\frac{1}{\alpha}, 1)$-Bernstein.

Proof. For all $k \neq k^*$ and $t$, we have $E[(x_t^k)^2 \mid G_{t-1}] \leq 1 = \frac{1}{\alpha}\,\alpha \leq \frac{1}{\alpha}\, E[x_t^k \mid G_{t-1}]$.

By Theorem 2 we get the $R_T^{k^*} = O(K_T) = O(\ln\ln T)$ rate. Gaillard et al. [2014] show constant
regret for finitely many experts and i.i.d. losses with a gap. Our alternative proof above shows that
neither finiteness nor i.i.d. are essential for fast rates in this case.
3.2 Hedge Setting: Any $(1, \beta)$-Bernstein
The next example illustrates that we can sometimes get the fast rates without a gap. And it also shows
that we can get any intermediate rate: we construct an example satisfying the Bernstein condition for
any $\beta \in [0, 1]$ of our choosing (such examples occur naturally in classification settings such as those
considered in the example in Appendix D).

Fix $\beta \in [0, 1]$. Each expert $k = 1, 2, \ldots$ is parametrised by a real number $\mu_k \in [0, 1/2]$. The only
assumption we make is that $\mu_k = 0$ for some $k$, and $\inf_k \{\mu_k : \mu_k > 0\} = 0$. For a concrete example let
us choose $\mu_1 = 0$ and $\mu_k = 1/k$ for $k = 2, 3, \ldots$ Expert $\mu_k$ has loss $\frac{1}{2} - \mu_k$ with probability $\frac{1 - \mu_k^{2/\beta - 1}}{2}$
and loss $\frac{1}{2} + \mu_k$ otherwise, independently between experts and rounds. Expert $\mu_k$ has mean loss
$\frac{1}{2} + \mu_k^{2/\beta}$, and so $\mu_1 = 0$ is best, with loss deterministically equal to $\frac{1}{2}$. The squared excess loss of
$\mu_k$ is $\mu_k^2$. So we have the Bernstein condition with exponent $\beta$ (but no $\beta' > \beta$) and constant 1, and
the associated regret rate by Theorem 2.
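A small simulator of this construction for $\beta \in (0, 1]$ (the function name is ours); by design it gives $E[x_t^k] = \mu_k^{2/\beta}$ and $E[(x_t^k)^2] = \mu_k^2 = E[x_t^k]^\beta$:

```python
import numpy as np

def sample_expert_losses(mu, beta, T, seed=0):
    """Draw T losses for the expert with parameter mu in [0, 1/2]:
    loss 1/2 - mu with probability (1 - mu**(2/beta - 1)) / 2,
    loss 1/2 + mu otherwise (requires beta in (0, 1])."""
    rng = np.random.default_rng(seed)
    p_low = (1.0 - mu ** (2.0 / beta - 1.0)) / 2.0
    signs = np.where(rng.random(T) < p_low, -1.0, 1.0)
    return 0.5 + signs * mu
```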
Note that for $\beta = 0$ (the hard case) all experts have mean loss equal to $\frac{1}{2}$. So no matter which $k^*$
we designate as the best expert our pseudo-regret $E[R_T^{k^*}]$ is zero. Yet the experts do not agree, as
their losses deviate from $\frac{1}{2}$ independently at random. Hence, by the central limit theorem, with high
probability our regret $R_T^{k^*}$ is of order $\sqrt{T}$. On the other side of the spectrum, for $\beta = 1$ (the best
case), we do not find a gap. We still have experts arbitrarily close to the best expert in mean, but their
expected excess loss squared equals their expected excess loss.

ERM/FTL (and hence all approaches based on it, such as [Bartlett and Mendelson, 2006]) may fail
completely on this type of examples. The clearest case is when $\{k : \mu_k > \epsilon\}$ is infinite for some $\epsilon > 0$.
Then at any $t$ there will be experts that, by chance, incurred their lower loss every round. Picking any
of them will result in expected instantaneous regret at least $\epsilon^{2/\beta}$, leading to linear regret overall.

The requirement $\mu_k = 0$ for some $k$ is essential. If instead $\mu_k > 0$ for all $k$ then there is no best expert
in the class. Theorem 19 in Appendix D shows how to deal with this case.
3.3 Hedge Setting: Markov Chains
Suppose we model a binary sequence $z_1, z_2, \ldots, z_T$ with $m$-th order Markov chains. As experts we
consider all possible functions $f : \{0,1\}^m \to \{0,1\}$ that map a history of length $m$ to a prediction
for the next outcome, and the loss of expert $f$ is the 0/1-loss: $\ell_t^f = |f(z_{t-m}, \ldots, z_{t-1}) - z_t|$. (We
initialize $z_{1-m} = \ldots = z_0 = 0$.) A uniform prior on this finite set of $2^{2^m}$ experts results in worst-case
regret of order $\sqrt{T\, 2^m}$. Then, if the data are actually generated by an $m$-th order Markov chain with
transition probabilities $P(z_t = 1 \mid (z_{t-m}, \ldots, z_{t-1}) = a) = p_a$, we have $f^*(a) = \mathbf{1}\{p_a \geq \frac{1}{2}\}$ and
\[
E[(x_t^f)^2 \mid (z_{t-m}, \ldots, z_{t-1}) = a] = 1, \qquad E[x_t^f \mid (z_{t-m}, \ldots, z_{t-1}) = a] = 2\,|p_a - \tfrac{1}{2}|
\]
for any $f$ such that $f(a) \neq f^*(a)$. So the Bernstein condition holds with $\beta = 1$ and $B = \frac{1}{2 \min_a |p_a - 1/2|}$.

3.4 OCO: Hinge Loss on the Unit Ball
Let $(x_1, y_1), (x_2, y_2), \ldots$ be classification data, with $y_t \in \{-1, +1\}$ and $x_t \in \mathbb{R}^d$, and consider the
hinge loss $\ell_t(u) = \max\{0, 1 - y_t \langle x_t, u\rangle\}$. Now suppose, for simplicity, that both $x_t$ and $u$ come
from the $d$-dimensional unit Euclidean ball, such that $\langle x_t, u\rangle \in [-1, +1]$ and hence the hinge is never
active, i.e. $\ell_t(u) = 1 - y_t \langle x_t, u\rangle$. Then, if the data turn out to be i.i.d. observations from a fixed
distribution P, the Bernstein condition holds with $\beta = 1$ (the proof can be found in Appendix C):

Lemma 4 (Unregularized Hinge Loss Example). Consider the hinge loss setting above, where
$|\langle x_t, u\rangle| \leq 1$. If the data are i.i.d., then the $(B, \beta)$-Bernstein condition is satisfied with $\beta = 1$ and
$B = \frac{2\lambda_{\max}}{\|\mu\|}$, where $\lambda_{\max}$ is the maximum eigenvalue of $E[xx^\top]$ and $\mu = E[yx]$, provided that $\|\mu\| > 0$.

In particular, if $x_t$ is uniformly distributed on the sphere and $y_t = \mathrm{sign}(\langle \bar{u}, x_t\rangle)$ is the noiseless
classification of $x_t$ according to the hyper-plane with normal vector $\bar{u}$, then $B \leq c\sqrt{d}$ for some absolute
constant $c > 0$.

The excluded case $\|\mu\| = 0$ only happens in the degenerate case that there is nothing to learn, because
$\mu = 0$ implies that the expected hinge loss is 1, its maximal value, for all $u$.
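The constant of Lemma 4 can be estimated from a data sample by plugging in empirical moments (a sketch with names of our choosing; it assumes $\|\mu\| > 0$):

```python
import numpy as np

def hinge_bernstein_constant(X, y):
    """Plug-in estimate of B = 2 * lambda_max / ||mu|| from Lemma 4.
    X: (n, d) features; y: (n,) labels in {-1, +1}."""
    second_moment = X.T @ X / len(X)                 # estimates E[x x^T]
    lam_max = np.linalg.eigvalsh(second_moment)[-1]  # top eigenvalue
    mu = (y[:, None] * X).mean(axis=0)               # estimates E[y x]
    return 2.0 * lam_max / np.linalg.norm(mu)
```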
3.5 OCO: Absolute Loss
Let $U = [0, 1]$ be the unit interval. Consider the absolute loss $\ell_t(u) = |u - x_t|$ where $x_t \in [0, 1]$ are
drawn i.i.d. from P. Let $u^* \in \arg\min_u E|u - x|$ minimize the expected loss. In this case we may
simplify $\langle w - u^*, \nabla \ell(w)\rangle = (w - u^*)\,\mathrm{sign}(w - x)$. To satisfy the Bernstein condition, we therefore
want $B$ such that, for all $w \in [0, 1]$,
\[
E\big[\big((w - u^*)\,\mathrm{sign}(w - x)\big)^2\big] \leq B\, E\big[(w - u^*)\,\mathrm{sign}(w - x)\big]^\beta.
\]
That is,
\[
|w - u^*|^{2-\beta} \leq B\, 2^\beta\, \big|P(x < w) - \tfrac{1}{2}\big|^\beta.
\]
For instance, if the distribution of $x$ has a strictly positive density $p(x) \geq m > 0$, then $u^*$ is the
median and $|P(x < w) - \tfrac{1}{2}| = |P(x < w) - P(x < u^*)| \geq m\,|w - u^*|$, so the condition holds with $\beta = 1$
and $B = \frac{1}{2m}$. Alternatively, for a discrete distribution on two points $a$ and $b$ with probabilities $p$ and
$1 - p$, the condition holds with $\beta = 1$ and $B = \frac{1}{|2p - 1|}$, provided that $p \neq \frac{1}{2}$, as can be seen by bounding
$|w - u^*| \leq 1$ and $|P(x < w) - \tfrac{1}{2}| \geq |p - \tfrac{1}{2}|$.
4 Proof Ideas

This section builds up to prove our main result, Theorem 2. We first introduce the handy ESI
abbreviation that allows us to reason simultaneously in expectation and with high probability. We
then provide two alternative characterizations of the Bernstein condition that are equivalent for
bounded losses. Finally, we show how one of these, ESI-Bernstein, combines with individual-sequence
second-order regret bounds to give rise to Theorem 2.
4.1 Notation: Exponential Stochastic Inequality (ESI, pronounce easy)
Definition 5. A random variable $X$ is exponentially stochastically negative, denoted $X \trianglelefteq 0$, if
$E[e^X] \leq 1$. For any $\eta \geq 0$, we write $X \trianglelefteq_\eta 0$ if $\eta X \trianglelefteq 0$. For any pair of random variables $X$ and $Y$,
the exponential stochastic inequality (ESI) $X \trianglelefteq_\eta Y$ is defined as expressing $X - Y \trianglelefteq_\eta 0$; $X \trianglelefteq Y$ is
defined as $X \trianglelefteq_1 Y$.
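As a quick numerical illustration of the definition (not part of the proofs), exponential stochastic negativity can be checked by a plug-in estimate of $E[e^{\eta X}]$:

```python
import numpy as np

def esi_negative(samples, eta=1.0):
    """Empirical check of X ESI-below 0 at rate eta: E[exp(eta * X)] <= 1.
    By Markov's inequality this yields P(X >= -ln(delta)/eta) <= delta."""
    return float(np.mean(np.exp(eta * samples))) <= 1.0
```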
Lemma 6. Exponential stochastic negativity/inequality has the following useful properties:

1. (Negativity). Let $X \trianglelefteq 0$. As the notation suggests, $X$ is negative in expectation and with high
probability. That is, $E[X] \leq 0$ and $P\{X \geq -\ln \delta\} \leq \delta$ for all $\delta > 0$.

2. (Convex combination). Let $\{X^f\}_{f \in F}$ be a family of random variables and let $w$ be a
probability distribution on $F$. If $X^f \trianglelefteq 0$ for all $f$ then $E_{f \sim w}[X^f] \trianglelefteq 0$.

3. (Chain rule). Let $X_1, X_2, \ldots$ be adapted to a filtration $G_1 \subseteq G_2 \subseteq \ldots$ (i.e. $X_t$ is $G_t$-measurable
for each $t$). If $X_t \mid G_{t-1} \trianglelefteq 0$ almost surely for all $t$, then $\sum_{t=1}^{T} X_t \trianglelefteq 0$ for all $T \geq 0$.

Proof. Negativity: By Jensen's inequality $E[X] \leq \ln E[e^X] \leq 0$, whereas by Markov's inequality $P\{X \geq -\ln \delta\} = P\{e^X \geq \frac{1}{\delta}\} \leq \delta\, E[e^X] \leq \delta$. Convex combination: By Jensen's inequality
$E[e^{E_{f\sim w}[X^f]}] \leq E_{f\sim w}\, E[e^{X^f}] \leq 1$. Chain rule: By induction. The base case $T = 0$ holds trivially.
For $T > 0$ we have $E[e^{\sum_{t=1}^{T} X_t}] = E\big[e^{\sum_{t=1}^{T-1} X_t}\, E[e^{X_T} \mid G_{T-1}]\big] \leq E[e^{\sum_{t=1}^{T-1} X_t}] \leq 1$.

4.2 The Bernstein Condition and Second-order Bounds
Our main result, Theorem 2, bounds the regret $R_T^{f^*}$ compared to the stochastically optimal predictor
$f^*$ when the sequence of losses $\ell_1, \ell_2, \ldots$ comes from a Bernstein distribution P. For simplicity we
only consider the OCO setting in this sketch. Full details are in Theorem 11. Our starting point
will be the individual-sequence second-order bound (2), which implies $R_T^{f^*} \leq \tilde{R}_T^{f^*} = O(\sqrt{V_T^{f^*} K_T})$.

The crucial technical contribution of this paper is to establish that for Bernstein distributions $V_T^{f^*}$ is
bounded in terms of $\tilde{R}_T^{f^*}$ with high probability. Combination with the individual-sequence bound
then gives that $\tilde{R}_T^{f^*}$ is bounded in terms of a function of itself. And solving the inequality for $\tilde{R}_T^{f^*}$
establishes the fast rates for $R_T^{f^*}$.

To get a first intuition as to why $V_T^{f^*}$ would be bounded in terms of $\tilde{R}_T^{f^*}$, we look at their relation
in expectation. Recall that $V_T^{f^*} = \sum_{t=1}^{T} (x_t^{f_t})^2$ and $\tilde{R}_T^{f^*} = \sum_{t=1}^{T} x_t^{f_t}$, where $f_t$ is the prediction of the
algorithm in round $t$. We will bound $(x_t^{f_t})^2$ in terms of $x_t^{f_t}$ separately for each round $t$. The Bernstein
Condition 1 for $\beta = 1$ directly yields
\[
E[V_T^{f^*}] = \sum_{t=1}^{T} E[(x_t^{f_t})^2] \leq B \sum_{t=1}^{T} E[x_t^{f_t}] = B\, E[\tilde{R}_T^{f^*}]. \tag{4}
\]
For $\beta < 1$ the final step of interchanging expectation and sums does not work directly, but we may use
the identity $z^\beta = \beta^\beta (1-\beta)^{1-\beta} \inf_{\epsilon > 0} \{\epsilon^{\beta-1} z + \epsilon^\beta\}$ for $z \geq 0$ to rewrite the Bernstein condition as the following
set of linear inequalities:

Condition 7. The excess loss family (3) satisfies the linearized $\beta$-Bernstein condition if there are
constants $c_1, c_2 > 0$ such that we have:
\[
c_1 \epsilon^{1-\beta}\, E[(x_t^f)^2 \mid G_{t-1}] - E[x_t^f \mid G_{t-1}] \leq c_2 \epsilon \qquad\text{a.s. for all } \epsilon > 0,\ f \in F \text{ and } t \in \mathbb{N}.
\]
This gives the following generalization of (4):
\[
c_1 \epsilon^{1-\beta}\, E[V_T^{f^*}] \leq E[\tilde{R}_T^{f^*}] + c_2 \epsilon T. \tag{5}
\]
Together with the individual sequence regret bound and optimization of $\epsilon$ this can be used to derive
the in-expectation part of Theorem 2.

Getting the in-probability part is more difficult, however, and requires relating $V_T^{f^*}$ and $\tilde{R}_T^{f^*}$ in
probability instead of in expectation. Our main technical contribution does exactly this, by showing
that the Bernstein condition is in fact equivalent to the following exponential strengthening of
Condition 7:

Condition 8. The family (3) satisfies the $\beta$-ESI-Bernstein condition if there are $c_1, c_2 > 0$ such that:
\[
\big(c_1 \epsilon^{1-\beta} (x_t^f)^2 - x_t^f\big) \mid G_{t-1} \trianglelefteq_{\epsilon^{1-\beta}} c_2 \epsilon \qquad\text{a.s. for all } \epsilon > 0,\ f \in F \text{ and } t \in \mathbb{N}.
\]
Condition 8 implies Condition 7 by Jensen's inequality (see Lemma 6 part 1). The surprising converse
is proved in Lemma 9 in the appendix. By telescoping over rounds using the chain rule from Lemma 6,
we see that ESI-Bernstein implies the following substantial strengthening of (5):
\[
c_1 \epsilon^{1-\beta}\, V_T^{f^*} - \tilde{R}_T^{f^*} \trianglelefteq_{\epsilon^{1-\beta}} c_2 \epsilon T \qquad\text{a.s. for all } \epsilon > 0,\ T \in \mathbb{N}. \tag{6}
\]
Now the second-order regret bound (2) can be rewritten, using $2\sqrt{ab} = \inf_{\gamma > 0} \{\gamma a + b/\gamma\}$, as:
\[
\text{for every } \gamma > 0: \qquad 2\tilde{R}_T^{f^*} \leq 2\sqrt{V_T^{f^*} K_T} + 2K_T \leq \gamma\, V_T^{f^*} + \frac{K_T}{\gamma} + 2K_T.
\]
Plugging in $\gamma = c_1 \epsilon^{1-\beta}$ we can chain this inequality with (6) to give, for all $\epsilon > 0$,
\[
2\tilde{R}_T^{f^*} \trianglelefteq_{\epsilon^{1-\beta}} \tilde{R}_T^{f^*} + c_2 \epsilon T + \frac{K_T}{c_1 \epsilon^{1-\beta}} + 2K_T, \tag{7}
\]
and both parts of Theorem 2 now follow by rearranging, plugging in the minimiser $\epsilon \propto K_T^{\frac{1}{2-\beta}}\, T^{-\frac{1}{2-\beta}}$,
and using Lemma 6 part 1.
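The choice of minimiser can be verified numerically. The snippet below compares a grid minimum of the $\epsilon$-dependent part of (7) with the closed-form stationary point; the constants are illustrative:

```python
import numpy as np

def check_minimiser(beta=0.5, c1=1.0, c2=1.0, K=10.0, T=10_000):
    """Minimise c2*eps*T + K/(c1*eps**(1-beta)) over eps on a grid and
    compare with the stationary point eps* = ((1-beta)K/(c1*c2*T))**(1/(2-beta))."""
    eps = np.logspace(-6, 0, 100_000)
    objective = c2 * eps * T + K / (c1 * eps ** (1.0 - beta))
    eps_star = ((1.0 - beta) * K / (c1 * c2 * T)) ** (1.0 / (2.0 - beta))
    return float(eps[np.argmin(objective)]), float(eps_star)  # nearly equal
```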
Acknowledgments
Koolen acknowledges support by the Netherlands Organization for Scientific Research (NWO, Veni
grant 639.021.439).
References
J-Y. Audibert. PAC-Bayesian statistical learning theory. PhD thesis, Université Paris VI, 2004.
J-Y. Audibert. Fast learning rates in statistical inference through aggregation. Ann. Stat., 37(4), 2009.
P. Bartlett and S. Mendelson. Empirical minimization. Probab. Theory Rel., 135(3):311–334, 2006.
P. Bartlett, M. Jordan, and J. McAuliffe. Convexity, classification, and risk bounds. J. Am. Stat. Assoc., 101(473):138–156, 2006.
N. Cesa-Bianchi and G. Lugosi. Prediction, learning, and games. Cambridge University Press, 2006.
N. Cesa-Bianchi, Y. Mansour, and G. Stoltz. Improved second-order bounds for prediction with expert advice. Machine Learning, 66(2/3):321–352, 2007.
C. Chiang, T. Yang, C. Le, M. Mahdavi, C. Lu, R. Jin, and S. Zhu. Online optimization with gradual variations. In Proc. 25th Conf. on Learning Theory (COLT), 2012.
K. Crammer, A. Kulesza, and M. Dredze. Adaptive regularization of weight vectors. In NIPS 22, 2009.
J. Duchi, E. Hazan, and Y. Singer. Adaptive subgradient methods for online learning and stochastic optimization. Journal of Machine Learning Research, 12:2121–2159, 2011.
T. van Erven and W. Koolen. MetaGrad: Multiple learning rates in online learning. In Advances in Neural Information Processing Systems 29, 2016.
T. van Erven, P. Grünwald, N. Mehta, M. Reid, and R. Williamson. Fast rates in statistical and online learning. Journal of Machine Learning Research, 16:1793–1861, 2015.
E. Even-Dar, M. Kearns, Y. Mansour, and J. Wortman. Regret to the best vs. regret to the average. Machine Learning, 72(1-2), 2008.
Y. Freund and R. Schapire. A decision-theoretic generalization of on-line learning and an application to boosting. Journal of Computer and System Sciences, 55:119–139, 1997.
P. Gaillard and S. Gerchinovitz. A chaining algorithm for online nonparametric regression. In Proc. 28th Conf. on Learning Theory (COLT), 2015.
P. Gaillard, G. Stoltz, and T. van Erven. A second-order bound with excess losses. In Proc. 27th COLT, 2014.
P. Grünwald. The safe Bayesian: learning the learning rate via the mixability gap. In ALT '12. Springer, 2012.
E. Hazan and S. Kale. Extracting certainty from uncertainty: Regret bounded by variation in costs. Machine Learning, 80(2-3):165–188, 2010.
V. Koltchinskii. Local Rademacher complexities and oracle inequalities in risk minimization. Ann. Stat., 34(6):2593–2656, 2006.
W. Koolen. The relative entropy bound for Squint. Blog entry on blog.wouterkoolen.info/, August 2015.
W. Koolen and T. van Erven. Second-order quantile methods for experts and combinatorial games. In Proc. 28th Conf. on Learning Theory (COLT), pages 1155–1175, 2015.
W. Koolen, T. van Erven, and P. Grünwald. Learning the learning rate for prediction with expert advice. In Advances in Neural Information Processing Systems 27, pages 2294–2302, 2014.
H. Luo and R. Schapire. Achieving all with no parameters: Adaptive normalhedge. In Proc. 28th COLT, 2015.
P. Massart and É. Nédélec. Risk bounds for statistical learning. Ann. Stat., 34(5):2326–2366, 2006.
B. McMahan and M. Streeter. Adaptive bound optimization for online convex optimization. In Proc. 23rd Conf. on Learning Theory (COLT), pages 244–256, 2010.
N. Mehta and R. Williamson. From stochastic mixability to fast rates. In NIPS 27, 2014.
F. Orabona, K. Crammer, and N. Cesa-Bianchi. A generalized online mirror descent with applications to classification and regression. Machine Learning, 99(3):411–435, 2015.
A. Rakhlin and K. Sridharan. Online nonparametric regression. In Proc. 27th COLT, 2014.
S. de Rooij, T. van Erven, P. Grünwald, and W. Koolen. Follow the leader if you can, Hedge if you must. Journal of Machine Learning Research, 15:1281–1316, April 2014.
A. Sani, G. Neu, and A. Lazaric. Exploiting easy data in online optimization. In NIPS 27, 2014.
S. Shalev-Shwartz. Online learning and online convex optimization. Foundations and Trends in Machine Learning, 4(2):107–194, 2011.
J. Steinhardt and P. Liang. Adaptivity and optimism: An improved exponentiated gradient algorithm. In Proc. 31th Int. Conf. on Machine Learning (ICML), pages 1593–1601, 2014.
A. Tsybakov. Optimal aggregation of classifiers in statistical learning. Ann. Stat., 32:135–166, 2004.
O. Wintenberger. Optimal learning with Bernstein Online Aggregation. ArXiv:1404.1356, 2015.
6,052 | 6,475 | A scalable end-to-end Gaussian process adapter for
irregularly sampled time series classification
Steven Cheng-Xian Li
Benjamin Marlin
College of Information and Computer Sciences
University of Massachusetts Amherst
Amherst, MA 01003
{cxl,marlin}@cs.umass.edu
Abstract
We present a general framework for classification of sparse and irregularly-sampled
time series. The properties of such time series can result in substantial uncertainty
about the values of the underlying temporal processes, while making the data
difficult to deal with using standard classification methods that assume fixed-dimensional feature spaces. To address these challenges, we propose an uncertainty-aware classification framework based on a special computational layer we refer to
as the Gaussian process adapter that can connect irregularly sampled time series
data to any black-box classifier learnable using gradient descent. We show how
to scale up the required computations based on combining the structured kernel
interpolation framework and the Lanczos approximation method, and how to
discriminatively train the Gaussian process adapter in combination with a number
of classifiers end-to-end using backpropagation.
1
Introduction
In this paper, we propose a general framework for classification of sparse and irregularly-sampled
time series. An irregularly-sampled time series is a sequence of samples with irregular intervals
between their observation times. These intervals can be large when the time series are also sparsely
sampled. Such time series data are studied in various areas including climate science [22], ecology
[4], biology [18], medicine [15] and astronomy [21]. Classification in this setting is challenging both
because the data cases are not naturally defined in a fixed-dimensional feature space due to irregular
sampling and variable numbers of samples, and because there can be substantial uncertainty about
the underlying temporal processes due to the sparsity of observations.
Recently, Li and Marlin [13] introduced the mixture of expected Gaussian kernels (MEG) framework,
an uncertainty-aware kernel for classifying sparse and irregularly sampled time series. Classification
with MEG kernels is shown to outperform models that ignore uncertainty due to sparse and irregular
sampling. On the other hand, various deep learning models including convolutional neural networks
[12] have been successfully applied to fields such as computer vision and natural language processing,
and have been shown to achieve state-of-the-art results on various tasks. Some of these models
have desirable properties for time series classification, but cannot be directly applied to sparse and
irregularly sampled time series.
Inspired by the MEG kernel, we propose an uncertainty-aware classification framework that enables
learning black-box classification models from sparse and irregularly sampled time series data. This
framework is based on the use of a computational layer that we refer to as the Gaussian process
(GP) adapter. The GP adapter uses Gaussian process regression to transform the irregular time series
data into a uniform representation, allowing sparse and irregularly sampled data to be fed into any
black-box classifier learnable using gradient descent while preserving uncertainty. However, the
O(n³) time and O(n²) space of exact GP regression make the GP adapter prohibitively expensive
when scaling up to large time series.
To address this problem, we show how to speed up the key computation of sampling from a GP
posterior based on combining the structured kernel interpolation (SKI) framework that was recently
proposed by Wilson and Nickisch [25] with Lanczos methods for approximating matrix functions [3].
Using the proposed sampling algorithm, the GP adapter can run in linear time and space in terms of
the length of the time series, and O(m log m) time when m inducing points are used.
We also show that GP adapter can be trained end-to-end together with the parameters of the chosen
classifier by backpropagation through the iterative Lanczos method. We present results using logistic
regression, fully-connected feedforward networks, convolutional neural networks and the MEG kernel.
We show that end-to-end discriminative training of the GP adapter outperforms a variety of baselines
in terms of classification performance, including models based only on GP mean interpolation, or
with GP regression trained separately using marginal likelihood.
2
Gaussian processes for sparse and irregularly-sampled time series
Our focus in this paper is on time series classification in the presence of sparse and irregular sampling.
In this problem, the data D contain N independent tuples consisting of a time series Si and a label
yi . Thus, D = {(S1 , y1 ), . . . , (SN , yN )}. Each time series Si is represented as a list of time points
t_i = [t_{i1}, ..., t_{i|S_i|}]ᵀ, and a list of corresponding values v_i = [v_{i1}, ..., v_{i|S_i|}]ᵀ. We assume that
each time series is observed over a common time interval [0, T]. However, different time series
are not necessarily observed at the same time points (i.e., t_i ≠ t_j in general). This implies that the
number of observations in different time series is not necessarily the same (i.e., |S_i| ≠ |S_j| in general).
Furthermore, the time intervals between observations within a single time series are not assumed to be
uniform.
Learning in this setting is challenging because the data cases are not naturally defined in a fixed-dimensional feature space due to the irregular sampling. This means that commonly used classifiers
that take fixed-length feature vectors as input are not applicable. In addition, there can be substantial
uncertainty about the underlying temporal processes due to the sparsity of observations.
To address these challenges, we build on ideas from the MEG kernel [13] by using GP regression
[17] to provide an uncertainty-aware representation of sparse and irregularly sampled time series. We
fix a set of reference time points x = [x₁, ..., x_d]ᵀ and represent a time series S = (t, v) in terms
of its posterior marginal distribution at these time points. We use GP regression with a zero-mean
GP prior and a covariance function k(·, ·) parameterized by kernel hyperparameters η. Let σ² be the
independent noise variance of the GP regression model. The GP parameters are θ = (η, σ²).
Under this model, the marginal posterior GP at x is Gaussian distributed with mean and covariance
given by

µ = K_{x,t} (K_{t,t} + σ²I)^{-1} v,    (1)
Σ = K_{x,x} − K_{x,t} (K_{t,t} + σ²I)^{-1} K_{t,x}    (2)

where K_{x,t} denotes the covariance matrix with [K_{x,t}]_{ij} = k(x_i, t_j). We note that it takes O(n³ + nd)
time to exactly compute the posterior mean µ, and O(n³ + n²d + nd²) time to exactly compute the
full posterior covariance matrix Σ, where n = |t| and d = |x|.
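As a concrete reference point, the following is a minimal NumPy sketch of the exact posterior in (1) and (2). The squared exponential kernel and all names here are our illustrative choices (the paper introduces its kernel hyperparameters later); this is the O(n³) exact computation that the rest of the paper accelerates.

```python
import numpy as np

def sq_exp_kernel(s, t, a=1.0, b=1.0):
    # Squared exponential covariance k(s, t) = a * exp(-b * (s - t)^2),
    # evaluated for all pairs of time points in s and t.
    return a * np.exp(-b * (s[:, None] - t[None, :]) ** 2)

def gp_posterior(t, v, x, sigma2=0.1, a=1.0, b=1.0):
    """Exact GP posterior at reference points x given observations (t, v).

    Implements mu = K_xt (K_tt + sigma2 I)^{-1} v and
    Sigma = K_xx - K_xt (K_tt + sigma2 I)^{-1} K_tx, i.e., equations (1)-(2).
    Costs O(n^3) time for n = len(t), motivating the approximations below.
    """
    K_tt = sq_exp_kernel(t, t, a, b) + sigma2 * np.eye(len(t))
    K_xt = sq_exp_kernel(x, t, a, b)
    K_xx = sq_exp_kernel(x, x, a, b)
    # Solve linear systems instead of forming the inverse, for numerical stability.
    mu = K_xt @ np.linalg.solve(K_tt, v)
    Sigma = K_xx - K_xt @ np.linalg.solve(K_tt, K_xt.T)
    return mu, Sigma
```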
3
The GP adapter and uncertainty-aware time series classification
In this section we describe our framework for time series classification in the presence of sparse
and irregular sampling. Our framework enables any black-box classifier learnable by gradient-based
methods to be applied to the problem of classifying sparse and irregularly sampled time series.
3.1
Classification frameworks and the Gaussian process adapter
In Section 2 we described how we can represent a time series through the marginal posterior it induces
under a Gaussian process regression model at any set of reference time points x. By fixing a common
2
set of reference time points x for all time series in a data set, every time series can be transformed
into a common representation in the form of a multivariate Gaussian N(z | µ, Σ; θ), with z being the
random vector distributed according to the posterior GP marginalized over the time points x.¹ Here
we assume that the GP parameters θ are shared across the entire data set.
If the z values were observed, we could simply apply a black-box classifier. A classifier can be
generally defined by a mapping function f (z; w) parameterized by w, associated with a loss function
`(f (z; w), y) where y is a label value from the output space Y. However, in our case z is a Gaussian
random variable, which means `(f (z; w), y) is nowitself a random variable given a label y. Therefore,
we use the expectation Ez?N (?,?;?) `(f (z; w), y) as the overall loss between the label y and a time
series S given its Gaussian representation N (?, ?; ?). The learning problem becomes minimizing
the expected loss over the entire data set:
w? , ? ? = argmin
w,?
N
X
Ezi ?N (?i ,?i ;?) `(f (zi ; w), yi ) .
(3)
i=1
Once we have the optimal parameters w? and ? ? , we can make predictions on unseen data. In
general, given an unseen time series S and its Gaussian representation N (?, ?; ? ? ), we can predict
its label using (4), although in many cases this can be simplified into a function of f (z; w? ) with the
expectation taken on or inside of f (z; w? ).
y ? = argmin Ez?N (?,?;?? ) `(f (z; w? ), y)
(4)
y?Y
We name the above approach the Uncertainty-Aware Classification (UAC) framework. Importantly,
this framework propagates the uncertainty in the GP posterior induced by each time series all the way
through to the loss function. We call the transformation S ↦ (µ, Σ) the Gaussian process
adapter, since it provides a uniform representation connecting raw irregularly sampled time series
data to a black-box classifier.
Variations of the UAC framework can be derived by taking the expectation at various positions within
f(z; w), where z ∼ N(µ, Σ; θ). Taking the expectation at an earlier stage simplifies the computation,
but the uncertainty information is integrated out earlier as well.² In the extreme case, if the
expectation is computed immediately after the GP adapter transformation, it is equivalent to
using the plug-in estimate µ for z in the loss function: ℓ(f(E_{z∼N(µ,Σ;θ)}[z]; w), y) = ℓ(f(µ; w), y).
We refer to this as the IMPutation (IMP) framework. The IMP framework discards the uncertainty
information completely, which further simplifies the computation. This simplified variation may be
useful when the time series are more densely sampled, where the uncertainty is less of a concern.
In practice, we can train the model using the UAC objective (3) and predict instead by IMP. In that
case, the predictions would be deterministic and can be computed efficiently without drawing samples
from the posterior GP as described later in Section 4.
3.2
Learning with the GP adapter
In the previous section, we showed that the UAC framework can be trained using (3). In this paper,
we use stochastic gradient descent to scalably optimize (3) by updating the model using a single time
series at a time, although it can be easily modified for batch or mini-batch updates.
From now on, we will focus on the optimization problem min_{w,θ} E_{z∼N(µ,Σ;θ)}[ℓ(f(z; w), y)], where µ, Σ are the
output of the GP adapter given a time series S = (t, v) and its label y. For many classifiers, the
expected loss E_{z∼N(µ,Σ;θ)}[ℓ(f(z; w), y)] cannot be analytically computed. In such cases, we use
the Monte Carlo average to approximate the expected loss:

E_{z∼N(µ,Σ;θ)}[ℓ(f(z; w), y)] ≈ (1/S) ∑_{s=1}^{S} ℓ(f(z_s; w), y),  where z_s ∼ N(µ, Σ; θ).    (5)
To learn the parameters of both the classifier w and the Gaussian process regression model θ jointly
under the expected loss, we need to be able to compute the gradient of the expectation given in (5).
¹ The notation N(µ, Σ; θ) explicitly expresses that both µ and Σ are functions of the GP parameters θ.
They are also functions of S = (t, v), as shown in (1) and (2).
² For example, the loss of the expected output of the classifier, ℓ(E_{z∼N(µ,Σ;θ)}[f(z; w)], y).
To achieve this, we reparameterize the Gaussian random variable using the identity z = µ + Rξ,
where ξ ∼ N(0, I) and R satisfies Σ = RRᵀ [11]. The gradients under this reparameterization
are given below, both of which can be approximated using Monte Carlo sampling as in (5). We will
focus on efficiently computing the gradient shown in (7), since we assume that the gradient of the
base classifier f(z; w) can be computed efficiently.

∂/∂w E_{z∼N(µ,Σ;θ)}[ℓ(f(z; w), y)] = E_{ξ∼N(0,I)}[∂/∂w ℓ(f(z; w), y)]    (6)

∂/∂θ E_{z∼N(µ,Σ;θ)}[ℓ(f(z; w), y)] = E_{ξ∼N(0,I)}[∑_i (∂ℓ(f(z; w), y)/∂z_i)(∂z_i/∂θ)]    (7)

There are several choices for R that satisfy Σ = RRᵀ. One common choice of R is the Cholesky
factor, a lower triangular matrix, which can be computed using Cholesky decomposition in O(d³) for
a d × d covariance matrix Σ [7]. We instead use the symmetric matrix square root R = Σ^{1/2}. We
will show that this particular choice of R leads to an efficient and scalable approximation algorithm
in Section 4.2.
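To make the reparameterization and the Monte Carlo objective (5) concrete, here is a deliberately naive sketch that computes the symmetric square root by exact eigendecomposition; `classifier` and `loss_fn` are hypothetical stand-ins for f and ℓ. This is the O(d³) baseline that Section 4 replaces with a Lanczos approximation.

```python
import numpy as np

def symmetric_sqrt(Sigma):
    # Symmetric square root via eigendecomposition: Sigma^{1/2} = U diag(sqrt(w)) U^T.
    w, U = np.linalg.eigh(Sigma)
    return (U * np.sqrt(np.clip(w, 0.0, None))) @ U.T

def mc_expected_loss(mu, Sigma, y, classifier, loss_fn, S=10, rng=None):
    """Monte Carlo estimate of E_{z ~ N(mu, Sigma)}[ loss_fn(classifier(z), y) ], eq. (5)."""
    rng = np.random.default_rng(rng)
    R = symmetric_sqrt(Sigma)
    total = 0.0
    for _ in range(S):
        xi = rng.standard_normal(mu.shape[0])
        z = mu + R @ xi          # reparameterized sample, z = mu + Sigma^{1/2} xi
        total += loss_fn(classifier(z), y)
    return total / S
```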
4
Fast sampling from posterior Gaussian processes
The computation required by the GP adapter is dominated by the time needed to draw samples from
the marginal GP posterior using z = µ + Σ^{1/2}ξ. In Section 2 we noted that the time complexity of
exactly computing the posterior mean µ and covariance Σ is O(n³ + nd) and O(n³ + n²d + nd²),
respectively. Once we have both µ and Σ, we still need to compute the square root of Σ, which
requires an additional O(d³) time to compute exactly. In this section, we show how to efficiently
generate samples of z.
4.1
Structured kernel interpolation for approximating GP posterior means
The main idea of the structured kernel interpolation (SKI) framework recently proposed by Wilson
and Nickisch [25] is to approximate a stationary kernel matrix K_{a,b} by the approximate kernel K̃_{a,b}
defined below, where u = [u₁, ..., u_m]ᵀ is a collection of evenly-spaced inducing points.

K_{a,b} ≈ K̃_{a,b} = W_a K_{u,u} W_bᵀ.    (8)

Letting p = |a| and q = |b|, W_a ∈ R^{p×m} is a sparse interpolation matrix where each row
contains only a small number of non-zero entries. We use local cubic convolution interpolation
(cubic interpolation for short) [10], as suggested in Wilson and Nickisch [25]. Each row of the
interpolation matrices W_a, W_b has at most four non-zero entries. Wilson and Nickisch [25] showed
that when the kernel is locally smooth (under the resolution of u), cubic interpolation results in
accurate approximation. This can be justified as follows: with cubic interpolation, the SKI kernel is
essentially the two-dimensional cubic interpolation of K_{a,b} using the exact regularly spaced samples
stored in K_{u,u}, which corresponds to classical bicubic convolution. In fact, we can show that K̃_{a,b}
asymptotically converges to K_{a,b} as m increases by following the derivation in Keys [10].
Plugging the SKI kernel into (1), the posterior GP mean evaluated at x can be approximated by

µ = K_{x,t}(K_{t,t} + σ²I)^{-1} v ≈ W_x K_{u,u} W_tᵀ (W_t K_{u,u} W_tᵀ + σ²I)^{-1} v.    (9)

The inducing points u are chosen to be evenly-spaced because K_{u,u} then forms a symmetric Toeplitz
matrix under a stationary covariance function. A symmetric Toeplitz matrix can be embedded into a
circulant matrix to perform matrix-vector multiplication using fast Fourier transforms [7].
Further, one can use the conjugate gradient method to solve for (W_t K_{u,u} W_tᵀ + σ²I)^{-1} v, which only
involves computing the matrix-vector product (W_t K_{u,u} W_tᵀ + σ²I)v. In practice, the conjugate
gradient method converges within only a few iterations. Therefore, approximating the posterior mean
µ using SKI takes only O(n + d + m log m) time to compute. In addition, since a symmetric Toeplitz
matrix K_{u,u} can be uniquely characterized by its first column, and W_t can be stored as a sparse
matrix, approximating µ requires only O(n + d + m) space.
Algorithm 1: Lanczos method for approximating Σ^{1/2}ξ
Input: covariance matrix Σ, dimension of the Krylov subspace k, random vector ξ
  β₁ = 0 and d₀ = 0
  d₁ = ξ/‖ξ‖
  for j = 1 to k do
    d = Σ d_j − β_j d_{j−1}
    α_j = d_jᵀ d
    d = d − α_j d_j
    β_{j+1} = ‖d‖
    d_{j+1} = d / β_{j+1}
  D = [d₁, ..., d_k]
  H = tridiagonal(β, α, β), i.e., the k × k symmetric tridiagonal matrix with diagonal (α₁, ..., α_k) and off-diagonals (β₂, ..., β_k)
  return ‖ξ‖ D H^{1/2} e₁        // e₁ = [1, 0, ..., 0]ᵀ
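A direct NumPy transcription of Algorithm 1 might look as follows. Here `matvec` stands in for the SKI-approximated product Σd of equation (10) below, and we omit the re-orthogonalization a production implementation might need in floating point; this is a sketch of the algorithm, not the authors' code.

```python
import numpy as np
from scipy.linalg import sqrtm

def lanczos_sqrt_matvec(matvec, xi, k):
    """Approximate Sigma^{1/2} @ xi with k Lanczos iterations (Algorithm 1).

    matvec(v) must return Sigma @ v; only matrix-vector products are needed,
    so Sigma itself is never formed explicitly.
    """
    n = xi.shape[0]
    D = np.zeros((n, k))
    alpha = np.zeros(k)
    beta = np.zeros(k + 1)        # beta[0] plays the role of beta_1 = 0
    d_prev = np.zeros(n)
    D[:, 0] = xi / np.linalg.norm(xi)
    for j in range(k):
        d = matvec(D[:, j]) - beta[j] * d_prev
        alpha[j] = D[:, j] @ d
        d = d - alpha[j] * D[:, j]
        beta[j + 1] = np.linalg.norm(d)
        if j + 1 < k:
            d_prev = D[:, j]
            D[:, j + 1] = d / beta[j + 1]
    # H: k x k tridiagonal with diagonal alpha and off-diagonals beta[1:k].
    H = np.diag(alpha) + np.diag(beta[1:k], 1) + np.diag(beta[1:k], -1)
    e1 = np.zeros(k); e1[0] = 1.0
    return np.linalg.norm(xi) * D @ (sqrtm(H).real @ e1)
```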
4.2
The Lanczos method for covariance square root-vector products
With the SKI techniques, although we can efficiently approximate the posterior mean µ, computing
Σ^{1/2}ξ is still challenging. If computed exactly, it takes O(n³ + n²d + nd²) time to compute Σ and
O(d³) time to take the square root. To overcome this bottleneck, we apply the SKI kernel within the
Lanczos method, one of the Krylov subspace approximation methods, to speed up the computation
of Σ^{1/2}ξ as shown in Algorithm 1. The advantage of the Lanczos method is that neither Σ nor Σ^{1/2}
needs to be computed explicitly. Like the conjugate gradient method, another example of a Krylov
subspace method, it only requires the computation of matrix-vector products with Σ as the matrix.
The idea of the Lanczos method is to approximate Σ^{1/2}ξ in the Krylov subspace K_k(Σ, ξ) =
span{ξ, Σξ, ..., Σ^{k−1}ξ}. The iteration in Algorithm 1, usually referred to as the Lanczos process,
essentially performs the Gram-Schmidt process to transform the basis {ξ, Σξ, ..., Σ^{k−1}ξ} into an
orthonormal basis {d₁, ..., d_k} for the subspace K_k(Σ, ξ).
The optimal approximation of Σ^{1/2}ξ in the Krylov subspace K_k(Σ, ξ) that minimizes the ℓ₂-norm
of the error is the orthogonal projection of Σ^{1/2}ξ onto K_k(Σ, ξ), namely y* = DDᵀΣ^{1/2}ξ. Since we
choose d₁ = ξ/‖ξ‖, the optimal projection can be written as y* = ‖ξ‖ DDᵀΣ^{1/2}D e₁, where
e₁ = [1, 0, ..., 0]ᵀ is the first column of the identity matrix.
One can show that the tridiagonal matrix H defined in Algorithm 1 satisfies DᵀΣD = H [20]. Also,
we have DᵀΣ^{1/2}D ≈ (DᵀΣD)^{1/2} since the eigenvalues of H approximate the extremal eigenvalues
of Σ [19]. Therefore we have y* = ‖ξ‖ DDᵀΣ^{1/2}D e₁ ≈ ‖ξ‖ D H^{1/2} e₁.
The error bound of the Lanczos method is analyzed in Ilić et al. [9]. Alternatively, one can show that
the Lanczos approximation converges superlinearly [16]. In practice, for a d × d covariance matrix
Σ, the approximation is sufficient for our sampling purpose with k ≪ d. As H is now a k × k matrix,
we can use any standard method to compute its square root in O(k³) time [2], which is considered
O(1) when k is chosen to be a small constant. Now the computation of the Lanczos method for
approximating Σ^{1/2}ξ is dominated by the matrix-vector product Σd during the Lanczos process.
Here we apply the SKI kernel trick again to efficiently approximate Σd by

Σd ≈ W_x K_{u,u} W_xᵀ d − W_x K_{u,u} W_tᵀ (W_t K_{u,u} W_tᵀ + σ²I)^{-1} W_t K_{u,u} W_xᵀ d.    (10)
Similar to the posterior mean, Σd can be approximated in O(n + d + m log m) time and linear space.
Therefore, for k = O(1) basis vectors, the entire Algorithm 1 takes O(n + d + m log m) time and
O(n + d + m) space, which is also the complexity of drawing a sample from the posterior GP.
To reduce the variance when estimating the expected loss (5), we can draw multiple samples from the
posterior GP: {Σ^{1/2}ξ_s}_{s=1,...,S}, where ξ_s ∼ N(0, I). Since all of the samples are associated with the
same covariance matrix Σ, we can use the block Lanczos process [8], an extension of the single-vector
Lanczos method presented in Algorithm 1, to simultaneously approximate Σ^{1/2}ξ for all S random
vectors ξ = [ξ₁, ..., ξ_S]. Similarly, during the block Lanczos process, we use the block conjugate
gradient method [6, 5] to simultaneously apply (W_t K_{u,u} W_tᵀ + σ²I)^{-1} to multiple right-hand sides.
5
End-to-end learning with the GP adapter
The most common way to train GP parameters is through maximizing the marginal likelihood [17]

log p(v|t, θ) = −½ vᵀ(K_{t,t} + σ²I)^{-1} v − ½ log |K_{t,t} + σ²I| − (n/2) log 2π.    (11)
If we follow this criterion, training the UAC framework becomes a two-stage procedure: first we
learn the GP parameters by maximizing the marginal likelihood. We then compute µ and Σ given each
time series S and the learned GP parameters θ*. Both µ and Σ are then fixed and used to train the
classifier using (6).
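For reference, a minimal sketch of evaluating (11), assuming a precomputed kernel matrix and using a Cholesky factorization for stability; this is generic GP code in the spirit of [17], not code from the paper.

```python
import numpy as np

def log_marginal_likelihood(K_tt, sigma2, v):
    """log p(v | t, theta) for a zero-mean GP, equation (11)."""
    n = len(v)
    L = np.linalg.cholesky(K_tt + sigma2 * np.eye(n))
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, v))   # (K_tt + sigma2 I)^{-1} v
    log_det = 2.0 * np.sum(np.log(np.diag(L)))            # log |K_tt + sigma2 I|
    return -0.5 * v @ alpha - 0.5 * log_det - 0.5 * n * np.log(2.0 * np.pi)
```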
In this section, we describe how to instead train the GP parameters discriminatively end-to-end using
backpropagation. As mentioned in Section 3, we train the UAC framework by jointly optimizing the
GP parameters θ and the parameters of the classifier w according to (6) and (7).
The most challenging part in (7) is to compute ∇z = ∇µ + ∇(Σ^{1/2}ξ).³ For ∇µ, we can derive the
gradient of the approximating posterior mean (9) as given in Appendix A. Note that the gradient ∇µ
can be approximated efficiently by repeatedly applying fast Fourier transforms and the conjugate
gradient method, in the same time and space complexity as computing (9).
On the other hand, ∇(Σ^{1/2}ξ) can be approximated by backpropagating through the Lanczos method
described in Algorithm 1. To carry out backpropagation, all operations in the Lanczos method must
be differentiable. For the approximation of Σd during the Lanczos process, we can similarly compute
the gradient of (10) efficiently using the SKI techniques, as in computing ∇µ (see Appendix A).
The gradient ∇H^{1/2} for the last step of Algorithm 1 can be derived as follows. From H = H^{1/2}H^{1/2},
we have ∇H = (∇H^{1/2})H^{1/2} + H^{1/2}(∇H^{1/2}). This is known as the Sylvester equation, which has
the form AX + XB = C, where A, B, C are matrices and X is the unknown matrix to solve
for. We can compute the gradient ∇H^{1/2} by solving the Sylvester equation using the Bartels-Stewart
algorithm [1] in O(k³) time for a k × k matrix H, which is considered O(1) for a small constant k.
Overall, training the GP adapter using stochastic optimization with the aforementioned approach
takes O(n + d + m log m) time and O(n + d + m) space for m inducing points, n observations in
the time series, and d features generated by the GP adapter.
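The Sylvester solve in this last step is a one-liner with standard tooling; the sketch below propagates a perturbation dH of H through the square root using the identity above. SciPy's `solve_sylvester` implements the Bartels-Stewart algorithm [1]; the function name and framing are our own.

```python
import numpy as np
from scipy.linalg import sqrtm, solve_sylvester

def sqrtm_directional_derivative(H, dH):
    """Directional derivative dS of S = H^{1/2} along a perturbation dH.

    From H = S S we get dH = dS S + S dS, a Sylvester equation A X + X B = C
    with A = B = S and C = dH, solved by the Bartels-Stewart algorithm.
    """
    S = sqrtm(H).real
    return solve_sylvester(S, S, dH)
```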
6
Related work
The recently proposed mixtures of expected Gaussian kernels (MEG) [13] for classification of
irregular time series is probably the closest work to ours. The random feature representation of the
MEG kernel is of the form √(2/m) E_{z∼N(µ,Σ)}[cos(w_iᵀz + b_i)], to which the sampling algorithm described
in Section 4 can be applied directly. However, by exploiting the spectral property of Gaussian
kernels, the expected random feature of the MEG kernel is shown to be analytically computable as
√(2/m) exp(−w_iᵀΣw_i/2) cos(w_iᵀµ + b_i). With the SKI techniques, we can efficiently approximate
both w_iᵀΣw_i and w_iᵀµ in the same time and space complexity as the GP adapter. Moreover, the
random features of the MEG kernel can be viewed as a stochastic layer in the classification network,
with no trainable parameters. All {w_i, b_i}_{i=1,...,m} are randomly initialized once at the beginning and
associated with the output of the GP adapter in a nonlinear way described above.
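Under the closed form above, computing the MEG features from the GP adapter output is direct; the sketch below assumes dense µ and Σ for clarity, whereas the paper approximates w_iᵀΣw_i and w_iᵀµ with the SKI machinery.

```python
import numpy as np

def meg_random_features(mu, Sigma, W, b):
    """Analytic expected random features of the MEG kernel [13].

    W holds one spectral sample w_i per row, b the matching phases;
    feature_i = sqrt(2/m) * exp(-w_i' Sigma w_i / 2) * cos(w_i' mu + b_i).
    """
    m = W.shape[0]
    quad = np.einsum('ij,jk,ik->i', W, Sigma, W)   # w_i' Sigma w_i for each i
    return np.sqrt(2.0 / m) * np.exp(-0.5 * quad) * np.cos(W @ mu + b)
```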
Moreover, the MEG kernel classification is originally a two-stage method: one first estimates the
GP parameters by maximizing the marginal likelihood and then uses the optimized GP parameters
to compute the MEG kernel for classification. Since the random feature is differentiable, with the
approximation of ∇µ and ∇(Σd) described in Section 5, we can form a similar classification network
that can be efficiently trained end-to-end using the GP adapter. In Section 7.2, we will show that
training the MEG kernel end-to-end leads to better classification performance.
³ For brevity, we drop ∂/∂θ from the gradient notation in this section.
[Figure 1 plots: left, approximation error (log scale) vs. log₂(# inducing points); middle, approximation error vs. # Lanczos iterations, with curves for time series lengths 1000, 2000, and 3000; right, running time in seconds (log scale) vs. time series length (×100) for exact, exact BP, Lanczos, and Lanczos BP.]
Figure 1: Left: Sample approximation error versus the number of inducing points. Middle: Sample
approximation error versus the number of Lanczos iterations. Right: Running time comparisons (in
seconds). BP denotes computing the gradient of the sample using backpropagation.
7
Experiments
In this section, we present experiments and results exploring several facets of the GP adapter
framework including the quality of the approximations and the classification performance of the
framework when combined with different base classifiers.
7.1
Quality of GP sampling approximations
The key to scalable learning with the GP adapter relies on both fast and accurate approximation
for drawing samples from the posterior GP. To assess the approximation quality, we first generate
a synthetic sparse and irregularly-sampled time series S by sampling from a zero-mean Gaussian
process at random time points. We use the squared exponential kernel k(t_i, t_j) = a exp(−b(t_i − t_j)²)
with randomly chosen hyperparameters. We then infer µ and Σ at some reference x given S. Let z̃
denote our approximation of z = µ + Σ^{1/2}ξ. In this experiment, we set the output size of z to be |S|,
that is, d = n. We evaluate the approximation quality by assessing the error ‖z̃ − z‖ computed with
a fixed random vector ξ.
The leftmost plot in Figure 1 shows the approximation error under different numbers of inducing
points m with k = 10 Lanczos iterations. The middle plot compares the approximation error as the
number of Lanczos iterations k varies, with m = 256 inducing points. These two plots show that the
approximation error drops as more inducing points and Lanczos iterations are used. In both plots,
the three lines correspond to different sizes for z: 1000 (bottom line), 2000 (middle line), 3000 (top
line). The separation between the curves is due to the fact that the errors are compared under the
same number of inducing points. Longer time series leads to lower resolution of the inducing points
and hence the higher approximation error.
Note that the approximation error comes from both the cubic interpolation and the Lanczos method.
Therefore, to achieve a certain normalized approximation error across different data sizes, we should
simultaneously use more inducing points and Lanczos iterations as the data grows. In practice, we
find that k ≥ 3 is sufficient for estimating the expected loss for classification.
The rightmost plot in Figure 1 compares the time to draw a sample using exact computation versus
the approximation method described in Section 4 (exact and Lanczos in the figure). We also compare
the time to compute the gradient with respect to the GP parameters by both the exact method and
the proposed approximation (exact BP and Lanczos BP in the figure) because this is the actual
computation carried out during training. In this part of the experiment, we use k = 10 and m = 256.
The plot shows that Lanczos approximation with the SKI kernel yields speed-ups of between 1 and
3 orders of magnitude. Interestingly, for the exact approach, the time for computing the gradient
roughly doubles the time of drawing samples. (Note that time is plotted in log scale.) This is because
computing gradients requires both forward and backward propagation, whereas drawing samples
corresponds to only the forward pass. Both the forward and backward passes take roughly the same
computation in the exact case. However, the gap is relatively larger for the approximation approach
due to the recursive relationship of the variables in the Lanczos process. In particular, dj is defined
recursively in terms of all of d1 , . . . , dj?1 , which makes the backpropagation computation more
complicated than the forward pass.
Table 1: Comparison of classification accuracy (in percent). IMP and UAC refer to the loss functions
for training described in Section 3.1, and we use IMP predictions throughout. Although not belonging
to the UAC framework, we put the MEG kernel in UAC since it is also uncertainty-aware.

                Marginal likelihood        End-to-end
                IMP        UAC             IMP        UAC
LogReg          77.90      78.23           79.12      79.24
MLP             85.49      87.05           86.49      87.95
ConvNet         87.61      88.17           89.84      91.41
MEG kernel      n/a        84.82           n/a        86.61

7.2
Classification with GP adapter
In this section, we evaluate the performance of classifying sparse and irregularly-sampled time series
using the UAC framework. We test the framework on the uWave data set,4 a collection of gesture
samples categorized into eight gesture patterns [14]. The data set has been split into 3582 training
instances and 896 test instances. Each time series contains 945 fully observed samples. Following
the data preparation procedure in the MEG kernel work [13], we randomly sample 10% of the
observations from each time series to simulate the sparse and irregular sampling scenario. In this
experiment, we use the squared exponential covariance function k(t_i, t_j) = a exp(−b(t_i − t_j)²) for
a, b > 0. Together with the independent noise parameter σ² > 0, the GP parameters are {a, b, σ²}.
To bypass the positivity constraints on the GP parameters, we reparameterize them as {α, β, γ} such
that a = e^α, b = e^β, and σ² = e^γ.
To demonstrate that the GP adapter is capable of working with various classifiers, we use the UAC
framework to train three different classifiers: a multi-class logistic regression (LogReg), a fully-connected feedforward network (MLP), and a convolutional neural network (ConvNet). The detailed
architecture of each model is described in Appendix C.
We use m = 256 inducing points, d = 254 features output by the GP adapter, k = 5 Lanczos
iterations, and S = 10 samples. We split the training set into two partitions: 70% for training and
30% for validation. We jointly train the classifier with the GP adapter using stochastic gradient
descent with Nesterov momentum. We apply early stopping based on the validation set. We also
compare to classification with the MEG kernel implemented using our GP adapter as described in
Section 6. We use 1000 random features trained with multi-class logistic regression.
Table 1 shows that among all three classifiers, training GP parameters discriminatively always leads
to better accuracy than maximizing the marginal likelihood. This claim also holds for the results
using the MEG kernel. Further, taking the uncertainty into account by sampling from the posterior
GP always outperforms training using only the posterior means. Finally, we can also see that the
classification accuracy improves as the model gets deeper.
8
Conclusions and future work
We have presented a general framework for classifying sparse and irregularly-sampled time series
and have shown how to scale up the required computations using a new approach to generating
approximate samples. We have validated the approximation quality, the computational speed-ups,
and the benefit of the proposed approach relative to existing baselines.
There are many promising directions for future work including investigating more complicated
covariance functions like the spectral mixture kernel [24], different classifiers including the encoder
LSTM [23], and extending the framework to multi-dimensional time series and GPs with multidimensional index sets (e.g., for spatial data). Lastly, the GP adapter can also be applied to other
problems such as dimensionality reduction by combining it with an autoencoder.
Acknowledgements
This work was supported by the National Science Foundation under Grant No. 1350522.
⁴ The data set UWaveGestureLibraryAll is available at http://timeseriesclassification.com.
References
[1] Richard H. Bartels and G. W. Stewart. Solution of the matrix equation AX + XB = C. Communications of the ACM, 15(9):820–826, 1972.
[2] Åke Björck and Sven Hammarling. A Schur method for the square root of a matrix. Linear Algebra and its Applications, 52:127–140, 1983.
[3] Edmond Chow and Yousef Saad. Preconditioned Krylov subspace methods for sampling multivariate Gaussian distributions. SIAM Journal on Scientific Computing, 36(2):A588–A608, 2014.
[4] J. S. Clark and O. N. Bjørnstad. Population time series: process variability, observation errors, missing values, lags, and hidden states. Ecology, 85(11):3140–3150, 2004.
[5] Augustin A. Dubrulle. Retooling the method of block conjugate gradients. Electronic Transactions on Numerical Analysis, 12:216–233, 2001.
[6] Y. T. Feng, D. R. J. Owen, and D. Perić. A block conjugate gradient method applied to linear systems with multiple right-hand sides. Computer Methods in Applied Mechanics and Engineering, 1995.
[7] Gene H. Golub and Charles F. Van Loan. Matrix Computations, volume 3. JHU Press, 2012.
[8] Gene Howard Golub and Richard Underwood. The block Lanczos method for computing eigenvalues. Mathematical Software, 3:361–377, 1977.
[9] M. Ilić, Ian W. Turner, and Daniel P. Simpson. A restarted Lanczos approximation to functions of a symmetric matrix. IMA Journal of Numerical Analysis, page drp003, 2009.
[10] Robert G. Keys. Cubic convolution interpolation for digital image processing. IEEE Transactions on Acoustics, Speech and Signal Processing, 29(6):1153–1160, 1981.
[11] Diederik P. Kingma and Max Welling. Auto-encoding variational Bayes. In Proceedings of the 2nd International Conference on Learning Representations (ICLR), 2014.
[12] Yann LeCun, Fu Jie Huang, and Leon Bottou. Learning methods for generic object recognition with invariance to pose and lighting. In Proceedings of Computer Vision and Pattern Recognition (CVPR), 2004.
[13] Steven Cheng-Xian Li and Benjamin M. Marlin. Classification of sparse and irregularly sampled time series with mixtures of expected Gaussian kernels and random features. In 31st Conference on Uncertainty in Artificial Intelligence, 2015.
[14] Jiayang Liu, Lin Zhong, Jehan Wickramasuriya, and Venu Vasudevan. uWave: Accelerometer-based personalized gesture recognition and its applications. Pervasive and Mobile Computing, 2009.
[15] Benjamin M. Marlin, David C. Kale, Robinder G. Khemani, and Randall C. Wetzel. Unsupervised pattern discovery in electronic health care data using probabilistic clustering models. In Proceedings of the 2nd ACM SIGHIT International Health Informatics Symposium, pages 389–398, 2012.
[16] Beresford N. Parlett. The Symmetric Eigenvalue Problem, volume 7. SIAM, 1980.
[17] Carl Edward Rasmussen. Gaussian Processes for Machine Learning. 2006.
[18] T. Ruf. The Lomb-Scargle periodogram in biological rhythm research: analysis of incomplete and unequally spaced time-series. Biological Rhythm Research, 30(2):178–201, 1999.
[19] Yousef Saad. On the rates of convergence of the Lanczos and the block-Lanczos methods. SIAM Journal on Numerical Analysis, 17(5):687–706, 1980.
[20] Yousef Saad. Iterative Methods for Sparse Linear Systems. SIAM, 2003.
[21] Jeffrey D. Scargle. Studies in astronomical time series analysis. II. Statistical aspects of spectral analysis of unevenly spaced data. The Astrophysical Journal, 263:835–853, 1982.
[22] M. Schulz and K. Stattegger. SPECTRUM: Spectral analysis of unevenly spaced paleoclimatic time series. Computers & Geosciences, 23(9):929–945, 1997.
[23] Ilya Sutskever, Oriol Vinyals, and Quoc V. Le. Sequence to sequence learning with neural networks. In Advances in Neural Information Processing Systems, pages 3104–3112, 2014.
[24] Andrew Gordon Wilson and Ryan Prescott Adams. Gaussian process kernels for pattern discovery and extrapolation. In Proceedings of the 30th International Conference on Machine Learning, 2013.
[25] Andrew Gordon Wilson and Hannes Nickisch. Kernel interpolation for scalable structured Gaussian processes (KISS-GP). In Proceedings of the 32nd International Conference on Machine Learning, 2015.
| 6475 |@word middle:3 norm:1 vi1:1 nd:5 scalably:1 covariance:13 decomposition:1 recursively:1 carry:1 reduction:1 liu:1 series:55 uma:1 contains:2 daniel:1 ours:1 interestingly:1 rightmost:1 outperforms:2 existing:1 ka:4 com:1 si:5 diederik:1 written:1 must:1 numerical:3 partition:1 wx:5 kdd:2 enables:2 drop:2 plot:6 update:1 stationary:2 intelligence:1 de1:2 beginning:1 short:1 provides:1 mathematical:1 symposium:1 fullyconnected:1 inside:1 expected:11 roughly:2 nor:1 mechanic:1 multi:3 inspired:1 actual:1 becomes:2 spain:1 estimating:2 underlying:3 notation:2 moreover:2 argmin:2 superlinearly:1 minimizes:1 z:2 astronomy:1 marlin:5 transformation:2 temporal:3 rck:1 every:1 multidimensional:1 ti:7 xd:1 exactly:5 um:1 classifier:21 prohibitively:1 grant:1 yn:1 positive:1 engineering:1 local:1 encoding:1 interpolation:15 black:6 studied:1 challenging:4 co:2 bi:3 drj:1 lecun:1 practice:4 block:7 recursive:1 backpropagation:6 procedure:2 area:1 jhu:1 projection:2 ups:2 prescott:1 get:1 cannot:2 onto:1 put:1 applying:1 optimize:1 equivalent:1 deterministic:1 missing:1 maximizing:4 yt:1 kale:1 resolution:2 ke:2 immediately:1 d1:5 importantly:1 orthonormal:1 reparameterization:1 population:1 variation:2 exact:10 gps:1 us:2 carl:1 trick:1 expensive:1 approximated:5 updating:1 recognition:3 sparsely:1 xian:2 steven:2 observed:4 bottom:1 connected:1 substantial:3 benjamin:2 mentioned:1 complexity:4 nesterov:1 trained:5 solving:1 algebra:1 completely:1 basis:3 logreg:2 unequally:1 easily:1 various:5 represented:1 derivation:1 train:8 fast:4 describe:2 sven:1 monte:2 artificial:1 lag:1 larger:1 solve:3 cvpr:1 drawing:4 triangular:1 toeplitz:3 encoder:1 unseen:2 gp:60 transform:2 itself:1 jointly:3 sequence:3 rr:2 advantage:1 eigenvalue:4 differentiable:2 propose:3 product:4 combining:3 pthe:1 achieve:3 inducing:13 exploiting:1 convergence:1 double:1 sutskever:1 assessing:1 extending:1 generating:1 adam:1 converges:3 object:1 derive:1 andrew:2 pose:1 fixing:1 ij:1 edward:1 implemented:1 c:1 involves:1 implies:1 come:1 direction:1 stochastic:4 fix:1 biological:2 ryan:1 extension:1 exploring:1 hold:1 considered:2 exp:3 mapping:1 predict:2 bj:2 claim:1 early:1 purpose:1 applicable:1 label:6 augustin:1 extremal:1 successfully:1 gaussian:26 always:2 modified:1 zhong:1 mobile:1 wilson:6 pervasive:1 derived:2 focus:3 ax:2 validated:1 nd2:3 likelihood:6 baseline:2 stopping:1 entire:3 integrated:1 chow:1 hidden:1 geosciences:1 bartels:2 transformed:1 schulz:1 overall:2 classification:29 aforementioned:1 among:1 art:1 special:1 spatial:1 marginal:10 field:1 aware:6 once:3 sampling:15 biology:1 unsupervised:1 imp:7 future:2 gordon:2 richard:2 few:1 randomly:3 simultaneously:3 densely:1 national:1 ima:1 consisting:1 jeffrey:1 ecology:2 mlp:2 simpson:1 golub:2 mixture:4 extreme:1 analyzed:1 tj:6 xb:2 kt:6 accurate:2 bicubic:1 fu:1 capable:1 beresford:1 necessary:1 minw:1 orthogonal:1 incomplete:1 initialized:1 plotted:1 instance:2 column:2 earlier:2 wb:1 facet:1 lanczos:37 stewart:2 entry:2 uniform:3 tridiagonal:3 stored:2 connect:2 varies:1 nickisch:5 combined:1 synthetic:1 st:1 peri:1 lstm:1 amherst:2 siam:4 international:4 probabilistic:1 informatics:1 together:2 ilya:1 again:1 squared:2 choose:1 huang:1 return:1 li:3 account:1 accelerometer:1 satisfy:1 explicitly:2 vi:2 astrophysical:1 later:1 root:6 h1:8 extrapolation:1 bayes:1 complicated:2 ass:1 square:6 accuracy:3 convolutional:3 variance:2 efficiently:10 spaced:6 identify:1 correspond:1 yield:1 raw:1 carlo:2 lighting:1 naturally:2 associated:3 sampled:19 
massachusetts:1 astronomical:1 improves:1 dimensionality:1 originally:1 higher:1 follow:1 hannes:1 evaluated:1 box:6 furthermore:1 stage:3 lastly:1 hand:3 working:1 nonlinear:1 propagation:1 logistic:3 quality:5 scientific:1 grows:1 name:1 contain:1 normalized:1 vasudevan:1 analytically:2 hence:1 symmetric:6 deal:1 climate:1 gw:1 during:4 uniquely:1 backpropagating:1 noted:1 rhythm:2 criterion:1 leftmost:1 demonstrate:1 performs:1 percent:1 image:1 variational:1 recently:4 charles:1 common:5 volume:2 refer:4 similarly:2 language:1 dj:6 longer:1 ezi:1 base:2 posterior:23 multivariate:2 showed:2 closest:1 optimizing:1 discard:1 scenario:1 certain:1 yi:2 preserving:1 additional:1 care:1 signal:1 ii:1 full:1 desirable:1 multiple:3 infer:1 d0:1 smooth:1 characterized:1 plug:1 gesture:3 lin:1 e1:4 plugging:1 prediction:3 scalable:4 regression:11 sylvester:2 vision:2 expectation:6 essentially:2 iteration:9 kernel:37 represent:2 irregular:9 justified:1 addition:2 whereas:1 separately:1 interval:4 unevenly:2 saad:3 probably:1 pass:1 induced:1 regularly:1 schur:1 call:1 presence:2 feedforward:2 split:2 variety:1 adapter:30 zi:3 architecture:1 reduce:1 idea:3 simplifies:2 ti1:1 computable:1 bottleneck:1 speech:1 repeatedly:1 deep:1 jie:1 generally:1 useful:1 detailed:1 transforms:2 locally:1 induces:1 generate:2 http:1 outperform:1 express:1 key:4 four:1 imputation:1 d3:3 neither:1 sighit:1 backward:2 asymptotically:1 run:1 parameterized:2 uncertainty:17 hammarling:1 throughout:1 electronic:2 yann:1 separation:1 draw:4 appendix:3 scaling:1 layer:3 bound:1 followed:1 cheng:2 constraint:1 bp:5 n3:6 software:1 personalized:1 dominated:2 u1:1 speed:4 fourier:2 reparameterize:2 span:1 simulate:1 leon:1 aspect:1 relatively:1 structured:5 according:2 combination:1 conjugate:7 belonging:1 across:2 wi:8 making:1 s1:1 quoc:1 randall:1 taken:1 equation:4 needed:1 letting:1 irregularly:17 fed:1 end:18 available:1 operation:1 apply:4 eight:1 edmond:1 spectral:4 generic:1 batch:2 schmidt:1 rp:1 denotes:2 running:1 top:1 underwood:1 clustering:1 log2:1 marginalized:1 medicine:1 scargle:2 build:1 approximating:7 classical:1 feng:1 objective:1 gradient:26 iclr:1 subspace:7 convnet:2 venu:1 evenly:2 preconditioned:1 meg:17 length:4 besides:2 index:1 relationship:1 mini:1 kk:4 minimizing:1 difficult:1 robert:1 yousef:3 ski:11 unknown:1 perform:1 allowing:1 observation:8 convolution:3 howard:1 descent:4 communication:1 variability:1 y1:1 introduced:1 david:1 required:3 optimized:1 acoustic:1 learned:1 barcelona:1 kingma:1 nip:1 address:3 able:1 suggested:1 krylov:6 below:2 usually:1 pattern:4 sparsity:2 challenge:2 including:6 max:1 natural:1 turner:1 carried:1 autoencoder:1 auto:1 health:2 sn:1 prior:1 acknowledgement:1 discovery:2 multiplication:1 relative:1 embedded:1 fully:2 loss:12 discriminatively:3 versus:3 fixeddimensional:2 clark:1 validation:2 foundation:1 digital:1 sufficient:2 propagates:1 dd:1 classifying:4 bypass:1 row:2 supported:1 last:1 rasmussen:1 side:1 deeper:1 circulant:1 taking:3 sparse:20 distributed:2 benefit:1 overcome:1 dimension:1 curve:1 gram:1 van:1 parlett:1 kdk:1 commonly:1 collection:2 forward:4 simplified:2 welling:1 transaction:2 sj:1 approximate:10 ignore:1 gene:2 investigating:1 assumed:1 tuples:1 discriminative:1 xi:1 alternatively:1 spectrum:1 iterative:2 cxl:1 table:2 promising:1 learn:2 ku:11 zk:1 bottou:1 necessarily:1 main:1 noise:2 hyperparameters:2 n2:4 categorized:1 x1:1 referred:1 cubic:7 position:1 momentum:1 exponential:2 periodogram:1 ian:1 learnable:3 list:2 dk:2 
concern:1 magnitude:1 kx:6 gap:1 simply:1 wetzel:1 ez:10 vinyals:1 kiss:1 restarted:1 corresponds:2 satisfies:2 relies:1 acm:2 ma:1 identity:1 viewed:1 shared:1 owen:1 loan:1 wt:14 pas:2 ili:2 invariance:1 college:1 cholesky:2 brevity:1 preparation:1 oriol:1 evaluate:2 trainable:1 |
6,053 | 6,476 | Inference by Reparameterization in Neural
Population Codes
Rajkumar V. Raju
Department of ECE
Rice University
Houston, TX 77005
rv12@rice.edu
Xaq Pitkow
Dept. of Neuroscience, Dept. of ECE
Baylor College of Medicine, Rice University
Houston, TX 77005
xaq@rice.edu
Abstract
Behavioral experiments on humans and animals suggest that the brain performs
probabilistic inference to interpret its environment. Here we present a new general-purpose, biologically-plausible neural implementation of approximate inference.
The neural network represents uncertainty using Probabilistic Population Codes
(PPCs), which are distributed neural representations that naturally encode probability distributions, and support marginalization and evidence integration in a
biologically-plausible manner. By connecting multiple PPCs together as a probabilistic graphical model, we represent multivariate probability distributions. Approximate inference in graphical models can be accomplished by message-passing
algorithms that disseminate local information throughout the graph. An attractive
and often accurate example of such an algorithm is Loopy Belief Propagation
(LBP), which uses local marginalization and evidence integration operations to
perform approximate inference efficiently even for complex models. Unfortunately,
a subtle feature of LBP renders it neurally implausible. However, LBP can be
elegantly reformulated as a sequence of Tree-based Reparameterizations (TRP)
of the graphical model. We re-express the TRP updates as a nonlinear dynamical
system with both fast and slow timescales, and show that this produces a neurally
plausible solution. By combining all of these ideas, we show that a network of
PPCs can represent multivariate probability distributions and implement the TRP
updates to perform probabilistic inference. Simulations with Gaussian graphical
models demonstrate that the neural network inference quality is comparable to
the direct evaluation of LBP and robust to noise, and thus provides a promising
mechanism for general probabilistic inference in the population codes of the brain.
1
Introduction
In everyday life we constantly face tasks we must perform in the presence of sensory uncertainty. A
natural and efficient strategy is then to use probabilistic computation. Behavioral experiments have
established that humans and animals do in fact use probabilistic rules in sensory, motor and cognitive
domains [1, 2, 3]. However, the implementation of such computations at the level of neural circuits is
not well understood.
In this work, we ask how distributed neural computations can consolidate incoming sensory information and reformat it so it is accessible for many tasks. More precisely, how can the brain
simultaneously infer marginal probabilities in a probabilistic model of the world? Previous efforts
to model marginalization in neural networks using distributed codes invoked limiting assumptions,
either treating only a small number of variables [4], allowing only binary variables [5, 6, 7], or
restricting interactions [8, 9]. Real-life tasks are more complicated and involve a large number of
variables that need to be marginalized out, requiring a more general inference architecture.
Here we present a distributed, nonlinear, recurrent network of neurons that performs inference about
many interacting variables. There are two crucial parts to this model: the representation and the
inference algorithm. We assume that brains represent probabilities over individual variables using
Probabilistic Population Codes (PPCs) [10], which were derived by using Bayes? Rule on experimentally measured neural responses to sensory stimuli. Here for the first time we link multiple PPCs
together to construct a large-scale graphical model. For the inference algorithm, many researchers
have considered Loopy Belief Propagation (LBP) to be a simple and efficient candidate algorithm
for the brain [11, 12, 13, 14, 8, 5, 7, 6]. However, we will discuss one particular feature of LBP that
makes it neurally implausible. Instead, we propose that an alternative formulation of LBP known as
Tree-based Reparameterization (TRP) [15], with some modifications for continuous-time operation
at two timescales, is well-suited for neural implementation in population codes.
We describe this network mathematically below, but the main conceptual ideas are fairly straightforward: multiplexed patterns of activity encode statistical information about subsets of variables, and
neural interactions disseminate these statistics to all other relevant encoded variables.
In Section 2 we review key properties of our model of how neurons can represent probabilistic
information through PPCs. Section 3 reviews graphical models, Loopy Belief Propagation and
Tree-based Reparameterization. In Section 4, we merge these ingredients to model how populations
of neurons can represent and perform inference on large multivariate distributions. Section 5 describes
experiments to test the performance of network. We summarize and discuss our results in Section 6.
2
Probabilistic Population Codes
Neural responses r vary from trial to trial, even to repeated presentations of the same stimulus x.
This variability can be expressed as the likelihood function p(r|x). Experimental data from several
brain areas responding to simple stimuli suggests that this variability often belongs to the exponential
family of distributions with linear sufficient statistics [10, 16, 17, 4, 18]:
p(r|x) = φ(r) exp(h(x) · r),    (1)

where h(x) depends on the stimulus-dependent mean and fluctuations of the neuronal response, and
φ(r) is independent of the stimulus. For a conjugate prior p(x), the posterior distribution will also
have this general form, p(x|r) ∝ exp(h(x) · r). This neural code is known as a linear PPC: it is
a Probabilistic Population Code because the population activity collectively encodes the stimulus
probability, and it is linear because the log-likelihood is linear in r. In this paper, we assume responses
are drawn from this family, although incorporation of more general PPCs with nonlinear sufficient
statistics T(r) is possible: p(r|x) ∝ exp(h(x) · T(r)).
An important property of linear PPCs, central to this work, is that different projections of the
population activity encode the natural parameters of the underlying posterior distribution. For
example, if the posterior distribution is Gaussian (Figure 1), then p(x|r) ∝ exp(−½x²(a · r) + x(b · r)),
with a · r and b · r encoding the quadratic and linear natural parameters of the posterior, respectively. These
projections are related to the expectation parameters, the mean and variance, by µ = (b · r)/(a · r) and σ² = 1/(a · r).
A second important property of linear PPCs is that the variance of the encoded distribution is inversely
proportional to the overall amplitude of the neural activity. Intuitively, this means that more spikes
means more certainty (Figure 1).
The most fundamental probabilistic operations are the product rule and the sum rule. Linear PPCs
can perform both of these operations while maintaining a consistent representation [4], a useful
feature for constructing a model of canonical computation. For a log-linear probability code like
linear PPCs, the product rule corresponds to weighted summation of neural activities: p(x|r₁, r₂) ∝
p(x|r₁)p(x|r₂) ⟺ r₃ = A₁r₁ + A₂r₂. In contrast, to use the sum rule to marginalize out variables,
linear PPCs require nonlinear transformations of population activity. Specifically, a quadratic
nonlinearity with divisive normalization performs near-optimal marginalization in linear PPCs [4].
Quadratic interactions arise naturally through coincidence detection, and divisive normalization is a
nonlinear inhibitory effect widely observed in neural circuits [19, 20, 21]. Alternatively, near-optimal
marginalizations on PPCs can also be performed by more general nonlinear transformations [22]. In
sum, PPCs provide a biologically compatible representation of probabilistic information.
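To illustrate these two properties, here is a minimal sketch: decoding the Gaussian posterior from a linear PPC, and combining two populations with the product rule. The readout vectors `a` and `b` are the fixed projections from Figure 1, and the identity mappings A₁ = A₂ = I are our simplifying assumption, not a claim about the general case.

```python
import numpy as np

def decode_gaussian_ppc(r, a, b):
    """Read out the Gaussian posterior parameters from population activity r.

    a.r and b.r encode the natural parameters; mean = (b.r)/(a.r), var = 1/(a.r).
    Higher overall activity means larger a.r, hence smaller variance.
    """
    quad, lin = a @ r, b @ r
    return lin / quad, 1.0 / quad     # (mean, variance)

def combine_evidence(r1, r2):
    # Product rule: with A1 = A2 = I (an illustrative choice), r3 = r1 + r2
    # encodes p(x | r1, r2) proportional to p(x | r1) p(x | r2).
    return r1 + r2
```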
[Figure 1 panels: (A) neural response r_i vs. neuron index i, with readout projections a · r and b · r; (B) posterior p(x|r) vs. x, with µ = (b · r)/(a · r) and σ² = 1/(a · r).]
Figure 1: Key properties of linear PPCs. (A) Two single trial population responses for a particular
stimulus, with low and high amplitudes (blue and red). The two projections a · r and b · r encode the
natural parameters of the posterior. (B) Corresponding posteriors over stimulus variables determined
by the responses in panel A. The gain or overall amplitude of the population code is inversely
proportional to the variance of the posterior distribution.
3 Inference by Tree-based Reparameterization
3.1 Graphical Models
To generalize PPCs, we need to represent the joint probability distribution of many variables. A
natural way to represent multivariate distributions is with probabilistic graphical models. In this work,
we use the formalism of factor graphs, a type of bipartite graph in which nodes representing variables
are connected to other nodes called factors representing interactions between 'cliques' or sets of
variables (Figure 2A). The joint probability over all variables can then be represented as a product
over cliques, p(x) = (1/Z) ∏_{c∈C} ψ_c(x_c), where the ψ_c(x_c) are nonnegative compatibility functions on the
sets of variables x_c in each clique c ∈ C, and Z is a normalization constant. The distribution of
interest will be a posterior distribution p(x|r) that depends on neural responses r. Since the inference
algorithm we present is unchanged by this conditioning, for notational convenience we suppress
this dependence on r.
In this paper, we focus on pairwise interactions, although our main framework generalizes naturally
to richer, higher-order interactions. In a pairwise model, we allow singleton factors ψ_s for variable
nodes s in a set of vertices V, and pairwise interaction factors ψ_st for pairs (s, t) in the set of edges
E that connect those vertices. The joint distribution is then p(x) = (1/Z) ∏_{s∈V} ψ_s(x_s) ∏_{(s,t)∈E} ψ_st(x_s, x_t).
3.2
Belief Propagation and its neural plausibility
The inference problem of interest in this work is to compute the marginal distribution for each
variable, p_s(x_s) = ∫ p(x) d(x\x_s). This task is generally intractable. However, the factorization
structure of the distribution can be used to perform inference efficiently, either exactly in the case of
tree graphs, or approximately for graphs with cycles. One such inference algorithm is called Belief
Propagation (BP) [11]. BP iteratively passes information along the graph in the form of messages
mst (xt ) from node s to t, using only local computations that summarize the relevant aspects of other
messages upstream in the graph:
m_st^{n+1}(x_t) = ∫ dx_s ψ_s(x_s) ψ_st(x_s, x_t) ∏_{u∈N(s)\t} m_us^n(x_s)        b_s(x_s) ∝ ψ_s(x_s) ∏_{u∈N(s)} m_us(x_s)    (2)
where n is the time or iteration number, and N (s) is the set of neighbors of node s on the graph. The
estimated marginal, called the 'belief' b_s(x_s) at a node s, is proportional to the local evidence at
that node, ψ_s(x_s), and all the messages coming into node s. Similarly, the messages themselves are
determined self-consistently by combining incoming messages, except for the previous message
from the target node t.
This message exclusion is critical because it prevents evidence previously passed by the target node
from being counted as if it were new evidence. This exclusion only prevents overcounting on a tree
graph, and is unable to prevent overcounting of evidence passed around loops. For this reason, BP is
exact for trees, but only approximate for general, loopy graphs. If we use this algorithm anyway, it is
called 'Loopy' Belief Propagation (LBP), and it often has quite good performance [12].
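To make the exclusion in (2) concrete, here is a small sketch of synchronous LBP for discrete variables, with our own data-structure choices (dictionaries keyed by ordered node pairs, arrays over states). The line that skips the message from the target t is exactly the step argued below to be neurally awkward.

```python
import numpy as np

def lbp_step(messages, psi_node, psi_edge, neighbors):
    """One synchronous LBP sweep for discrete variables, following eq. (2).

    messages[(u, s)] is the current message from u to s (a vector over states);
    psi_node[s] is the singleton factor and psi_edge[(s, t)] the pairwise factor
    indexed by the ordered pair (rows: states of s, columns: states of t).
    """
    new = {}
    for (s, t) in messages:
        # Product of local evidence and all incoming messages EXCEPT the one from t.
        prod = psi_node[s].copy()
        for u in neighbors[s]:
            if u != t:
                prod *= messages[(u, s)]
        m = psi_edge[(s, t)].T @ prod        # sum over the states of x_s
        new[(s, t)] = m / m.sum()            # normalize for numerical stability
    return new

def beliefs(messages, psi_node, neighbors):
    b = {}
    for s in psi_node:
        prod = psi_node[s].copy()
        for u in neighbors[s]:
            prod *= messages[(u, s)]
        b[s] = prod / prod.sum()
    return b
```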
Multiple researchers have been intrigued by the possibility that the brain may perform LBP [13,
14, 5, 8, 7, 6], since it gives 'a principled framework for propagating, in parallel, information and
uncertainty between nodes in a network' [12]. Despite the conceptual appeal of LBP, it is important
to get certain details correct: in an inference algorithm described by nonlinear dynamics, deviations
from ideal behavior could in principle lead to very different outcomes.
One critically important detail is that each node must send different messages to different targets to
prevent overcounting. This exclusion can render LBP neurally implausible, because neurons cannot
readily send different output signals to many different target neurons. Some past work simply ignores
the problem [5, 7]; the resultant overcounting destroys much of the inferential power of LBP, often
performing worse than more naïve algorithms like mean-field inference. One better option is to
use different readouts of population activity for different targets [6], but this approach is inefficient
because it requires many readout populations for messages that differ only slightly, and requires
separate optimization for each possible target. Other efforts have avoided the problem entirely by
performing only unidirectional inference of low-dimensional variables that evolve over time [14].
Appealingly, one can circumvent all of these difficulties by using an alternative formulation of LBP
known as Tree-based Reparameterization (TRP).
3.3
Tree-based Reparameterization
Insightful work by Wainwright, Jaakkola, and Willsky [15] revealed that belief propagation can
be understood as a convenient way of refactorizing a joint probability distribution, according to
approximations of local marginal probabilities. For pairwise interactions, this can be written as

p(x) = (1/Z) ∏_{s∈V} ψ_s(x_s) ∏_{(s,t)∈E} ψ_st(x_s, x_t) = ∏_{s∈V} T_s(x_s) ∏_{(s,t)∈E} T_st(x_s, x_t) / (T_s(x_s) T_t(x_t))    (3)
where T_s(x_s) is a so-called 'pseudomarginal' distribution of x_s and T_st(x_s, x_t) is a joint pseudomarginal over x_s and x_t (Figure 2A–B), where T_s and T_st are the outcome of Loopy Belief
Propagation. The name pseudomarginal comes from the fact that these quantities are always locally
consistent with being marginal distributions, but they are only globally consistent with the true
marginals when the graphical model is tree-structured.
These pseudomarginals can be constructed iteratively as the true marginals of a different joint
distribution p^τ(x) on an isolated tree-structured subgraph τ. Compatibility functions from factors
remaining outside of the subgraph are collected in a residual term r^τ(x). This regrouping leaves the
joint distribution unchanged: p(x) = p^τ(x) r^τ(x).
The factors of p^τ are then rearranged by computing the true marginals on its subgraph τ, again
preserving the joint distribution. In subsequent updates, we iteratively refactorize using the marginals
of p^τ along different tree subgraphs τ (Figure 2C).
[Figure 2 panels: (A) original factors over x1, x2, x3; (B) tree reparameterized; (C) iterations i and j with p(x) = p^i(x) r^i(x) and p(x) = p^j(x) r^j(x).]
Figure 2: Visualization of tree reparameterization. (A) A probability distribution is specified by
factors {ψ_s, ψ_st} on a tree graph. (B) An alternative parameterization of the same distribution in
terms of the marginals {T_s, T_st}. (C) Two TRP updates for a 3 × 3 nearest-neighbor grid of variables.
Typical LBP can be interpreted as a sequence of local reparameterizations over just two neighboring
nodes and their corresponding edge [15]. Pseudomarginals are initialized at time n = 0 using the
original factors: T_s^0(x_s) ∝ ψ_s(x_s) and T_st^0(x_s, x_t) ∝ ψ_s(x_s) ψ_t(x_t) ψ_st(x_s, x_t). At iteration n + 1,
the node and edge pseudomarginals are computed by exactly marginalizing the distribution built from
previous pseudomarginals at iteration n:
T_s^{n+1} ∝ T_s^n ∏_{u∈N(s)} (1/T_s^n) ∫ T_su^n dx_u,    T_st^{n+1} ∝ T_st^n / ((∫ T_st^n dx_t)(∫ T_st^n dx_s)) · T_s^{n+1} T_t^{n+1}    (4)
Notice that, unlike the original form of LBP, operations on graph neighborhoods ∏_{u∈N(s)} do not
differentiate between targets.
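For comparison with the LBP sketch above, here is a minimal sketch of the updates in Eq. (4) on the same kind of hypothetical toy model; the model, values and names are again illustrative assumptions. At a fixed point the node pseudomarginals coincide with the LBP beliefs.

```python
import numpy as np

# same toy model as in the LBP sketch above (hypothetical binary factors)
edges = [(0, 1), (1, 2), (0, 2)]
psi_s = {s: np.array([1.0, 2.0]) for s in range(3)}
psi_st = {e: np.array([[2.0, 1.0], [1.0, 2.0]]) for e in edges}
nbrs = {0: [1, 2], 1: [0, 2], 2: [1, 0]}

# initialize pseudomarginals from the original factors
T_s = {s: psi_s[s] / psi_s[s].sum() for s in range(3)}
T_st = {}
for (s, t) in edges:
    J = psi_s[s][:, None] * psi_s[t][None, :] * psi_st[(s, t)]
    T_st[(s, t)] = J / J.sum()

def edge_marg(s, u):
    """Marginal over x_s of the pseudomarginal on edge (s, u)."""
    return T_st[(s, u)].sum(1) if (s, u) in T_st else T_st[(u, s)].sum(0)

for _ in range(50):
    # node update of Eq. (4): T_s <- T_s * prod_u [ marg_s(T_su) / T_s ]
    T_new = {}
    for s in range(3):
        val = T_s[s].copy()
        for u in nbrs[s]:
            val = val * edge_marg(s, u) / T_s[s]
        T_new[s] = val / val.sum()
    # edge update of Eq. (4): divide out old marginals, multiply in new ones
    for (s, t) in edges:
        J = T_st[(s, t)]
        J = J / (J.sum(1, keepdims=True) * J.sum(0, keepdims=True))
        J = J * T_new[s][:, None] * T_new[t][None, :]
        T_st[(s, t)] = J / J.sum()
    T_s = T_new

print({s: T_s[s] for s in range(3)})   # fixed points coincide with LBP beliefs
```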
4 Neural implementation of TRP updates
4.1 Updating natural parameters
TRP's operation only requires updating pseudomarginals, in place, using local information. These are
appealing properties for a candidate brain algorithm. This representation is also nicely compatible
with the structure of PPCs: different projections of the neural activity encode the natural parameters
of an exponential family distribution. It is thus useful to express the pseudomarginals and the TRP
inference algorithm using vectors of sufficient statistics φ_c(x_c) and natural parameters η_c^n for each
clique: T_c^n(x_c) = exp(η_c^n · φ_c(x_c)). For a model with at most pairwise interactions, the TRP
updates (4) can be expressed in terms of these natural parameters as
η_s^{n+1} = (1 − d_s) η_s^n + Σ_{u∈N(s)} g_V(η_su^n),    η_st^{n+1} = η_st^n + Q_s η_s^{n+1} + Q_t η_t^{n+1} + g_E(η_st^n)    (5)
where ds is the number of neighbors of node s, the matrices Qs , Qt embed the node parameters into
the space of the pairwise parameters, and gV and gE are nonlinear functions (for vertices V and
edges E) that are determined by the particular graphical model. Since the natural parameters reflect
log-probabilities, the product rule for probabilities becomes a linear sum in η, while the sum rule for
probabilities must be implemented by nonlinear operations g on η.
In the concrete case of a Gaussian graphical model, the joint distribution is given by p(x) ∝
exp(−½ xᵀAx + bᵀx), where A and b are the natural parameters, and the linear and quadratic
functions x and xxᵀ are the sufficient statistics. When we reparameterize this distribution by
pseudomarginals, we again have linear and quadratic sufficient statistics: two for each node, φ_s =
(−½x_s², x_s)ᵀ, and five for each edge, φ_st = (−½x_s², −x_s x_t, −½x_t², x_s, x_t)ᵀ. Each of these vectors
of sufficient statistics has its own vector of natural parameters, η_s and η_st.
To approximate the marginal probabilities, the TRP algorithm initializes the pseudomarginals to
η_s^0 = (A_ss, b_s)ᵀ and η_st^0 = (A_ss, A_st, A_tt, b_s, b_t)ᵀ. To update η, we must specify the nonlinear functions g that recover the univariate marginal distribution of a bivariate Gaussian T_st. For
T_st(x_s, x_t) ∝ exp(−½ η_1;st x_s² − η_2;st x_s x_t − ½ η_3;st x_t² + η_4;st x_s + η_5;st x_t), this marginal is
T_s(x_s) = ∫ dx_t T_st(x_s, x_t) ∝ exp( −(η_1;st η_3;st − η_2;st²)/η_3;st · x_s²/2 + (η_4;st η_3;st − η_2;st η_5;st)/η_3;st · x_s )    (6)
Using this, we can now specify the embedding matrices and the nonlinear functions in the TRP
updates (5): Q_s = (1 0 0 0 0; 0 0 0 1 0)ᵀ and Q_t = (0 0 1 0 0; 0 0 0 0 1)ᵀ, and
g_V(η_su^n) = ( η_1;su^n − (η_2;su^n)²/η_3;su^n ,  η_4;su^n − η_2;su^n η_5;su^n/η_3;su^n )ᵀ
g_E(η_st^n) = −( η_1;st^n − (η_2;st^n)²/η_3;st^n ,  0,  η_3;st^n − (η_2;st^n)²/η_1;st^n ,  η_4;st^n − η_2;st^n η_5;st^n/η_3;st^n ,  η_5;st^n − η_2;st^n η_4;st^n/η_1;st^n )ᵀ    (7)
where η_i;st is the i-th element of η_st. Notice that these nonlinearities are all quadratic functions with
a linear divisive normalization.
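The marginalization in Eq. (6), and hence g_V in Eq. (7), is exactly such a quadratic-with-divisive-normalization map; a minimal sketch with a hypothetical parameter vector, checked against direct moment computation (the function name g_marginal is an illustrative assumption):

```python
import numpy as np

def g_marginal(eta):
    """Map pairwise natural parameters (eta1..eta5) of
    T(xs, xt) ~ exp(-eta1*xs^2/2 - eta2*xs*xt - eta3*xt^2/2 + eta4*xs + eta5*xt)
    to the natural parameters of the marginal over xs (cf. Eq. 6)."""
    e1, e2, e3, e4, e5 = eta
    return np.array([(e1 * e3 - e2 ** 2) / e3,      # precision of the marginal
                     (e4 * e3 - e2 * e5) / e3])     # linear term of the marginal

# sanity check against direct moment computation for a hypothetical example
e = np.array([2.0, 0.5, 1.5, 0.3, -0.2])
A = np.array([[e[0], e[1]], [e[1], e[2]]])          # joint precision matrix
mu = np.linalg.solve(A, e[3:])                      # joint mean
prec, lin = g_marginal(e)
print(np.isclose(1.0 / prec, np.linalg.inv(A)[0, 0]))   # marginal variance
print(np.isclose(lin / prec, mu[0]))                     # marginal mean
```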
4.2 Separation of Time Scales for TRP Updates
An important feature of the TRP updates is that they circumvent the “message exclusion” problem
of LBP. The TRP update for the singleton terms, (4) and (5), includes contributions from all the
neighbors of a given node. There is no free lunch, however, and the price is that the updates at time
n + 1 depend on previous pseudomarginals at two different times, n and n + 1. The latter update is
therefore instantaneous information transmission, which is not biologically feasible.
To overcome this limitation, we observe that the brain can use fast and slow timescales τ_fast ≪ τ_slow
instead of instant and delayed signals. The fast timescale would most naturally correspond to the
membrane time constant of the neurons, whereas the slow timescale would emerge from network
interactions. We convert the update equations to continuous time, and introduce auxiliary variables
η̄ which are lowpass-filtered versions of η on a slow timescale: τ_slow dη̄/dt = −η̄ + η. The nonlinear
dynamics of (5) are then updated on a faster timescale τ_fast according to
τ_fast dη_s/dt = −d_s η̄_s + Σ_{u∈N(s)} g_V(η̄_su),    τ_fast dη_st/dt = Q_s η_s + Q_t η_t + g_E(η̄_st)    (8)
where the nonlinear terms g depend only on the slower, delayed activity η̄. By concatenating these
two sets of parameters, θ = (η, η̄), we obtain a coupled multidimensional dynamical system which
represents the approximation to the TRP iterations:
dθ/dt = Wθ + G(θ)    (9)
Here the weight matrix W and the nonlinear function G inherit their structure from the discrete-time
updates and the lowpass filtering at the fast and slow timescales.
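The fast–slow construction of Eqs. (8)–(9) can be illustrated with a minimal Euler-integration sketch. The one-dimensional state, the gain of 0.5 inside G, and all names are hypothetical stand-ins, not the paper's model; the point is only the joint form dθ/dt = Wθ + G(θ) with θ = (η, η̄).

```python
import numpy as np

tau_fast, tau_slow, dt = 1.0, 20.0, 0.01
# linear part: leak on the fast state eta, lowpass filter eta_bar toward eta
W = np.array([[-1.0 / tau_fast, 0.0],
              [1.0 / tau_slow, -1.0 / tau_slow]])
# nonlinear part acts only on the delayed copy eta_bar (second component)
G = lambda th: np.array([0.5 * th[1] / tau_fast, 0.0])

theta = np.array([1.0, 0.0])            # initial (eta, eta_bar)
for _ in range(20000):                  # Euler integration up to t = 200
    theta = theta + dt * (W @ theta + G(theta))
print(theta)                            # decays toward the fixed point (0, 0)
```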
4.3 Network Architecture
To complete our neural inference network, we now embed the nonlinear dynamics (9) into the
population activity r. Since different projections of the neural activity in a linear PPC encode natural
parameters of the underlying distribution, we map neural activity onto θ by r = Uθ, where U is
a rectangular N_r × N_θ embedding matrix that projects the natural parameters and their low-pass
versions into the neural response space. These parameters can be decoded from the neural activity as
θ = U⁺r, where U⁺ is the pseudoinverse of U.
Applying this basis transformation to (9), we have dr/dt = U dθ/dt = U(Wθ + G(θ)) = UWU⁺r +
UG(U⁺r). We then obtain the general form of the updates for the neural activity
dr/dt = W_L r + G_NL(r)    (10)
where W_L r = UWU⁺r and G_NL(r) = UG(U⁺r) correspond to the linear and nonlinear computational components that integrate and marginalize evidence, respectively. The nonlinear function on r
inherits the structure needed for the natural parameters, such as a quadratic polynomial with a divisive
normalization used in low-dimensional Gaussian marginalization problems [4], but now expanded to
high-dimensional graphical models. Figure 3 depicts the network architecture for the simple graphical
model from Figure 2A, both when there are distinct neural subpopulations for each factor (Figure 3A),
and when the variables are fully multiplexed across the entire neural population (Figure 3B). These
simple, biologically-plausible neural dynamics (10) represent a powerful, nonlinear, fully-recurrent
network of PPCs which implements the TRP update equations on an underlying graphical model.
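The embedding step behind Eq. (10) is simple linear algebra; a minimal sketch with a random, hypothetical embedding matrix U (the dimensions and names are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)
n_theta, n_neurons = 4, 50
U = rng.standard_normal((n_neurons, n_theta))   # hypothetical embedding matrix
U_pinv = np.linalg.pinv(U)                      # decoder U^+

theta = rng.standard_normal(n_theta)
r = U @ theta                                   # encode: r = U theta
print(np.allclose(U_pinv @ r, theta))           # decode: theta = U^+ r -> True

W = rng.standard_normal((n_theta, n_theta))     # any parameter dynamics matrix
W_L = U @ W @ U_pinv                            # its activity-space version
```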
[Figure 3 panels: (A) singleton populations r1, r2, r3 and pairwise populations r12, r23, coupled by linear and nonlinear connections; (B) singleton and pairwise projections of one distributed population, with linear and nonlinear connections.]
Figure 3: Distributed, nonlinear, recurrent network of neurons that performs probabilistic inference
on a graphical model. (A) This simple case uses distinct subpopulations of neurons to represent
different factors in the example model in Figure 2A. (B) A cartoon shows how the same distribution
can be represented as distinct projections of the distributed neural activity, instead of as distinct
populations. In both cases, since the neural activities encode log-probabilities, linear connections are
responsible for integrating evidence while nonlinear connections perform marginalization.
5 Experiments
We evaluate the performance of our neural network on a set of small Gaussian graphical models
with up to 400 interacting variables. The networks' time constants were set to have a ratio of
τ_slow/τ_fast = 20. Figure 4A shows the neural population dynamics as the network performs inference,
along with the temporal evolution of the corresponding node and pairwise means and covariances.
The neural activity exhibits a complicated timecourse, and reflects a combination of many natural
parameters changing simultaneously during inference. This type of behavior is seen in neural activity
recorded from behaving animals [23, 24, 25]. Figure 4B shows how the performance of the network
improves with the ratio of timescales τ_slow/τ_fast. The performance is quantified by the mean
squared error in the inferred parameters for a given ratio, divided by the error for a reference ratio of 10.
[Figure 4 panels: neural activity r, and the means and covariances of the inferred expectation parameters, each plotted over time.]
Figure 4: Dynamics of neural population activity (A) and the expectation parameters of the posterior
distribution that the population encodes (B) for one trial of the tree model in Figure 2A. (C) Multiple
simulations show that relative error decreases as a function of the ratio of slow to fast timescales.
Figure 5 shows that our recurrent neural network accurately infers the marginal probabilities, and
reaches almost the same conclusions as loopy belief propagation. The data points are obtained from
multiple simulations with different graph topologies, including graphs with many loops. Figure 6
verifies that the network is robust to noise even when there are few neurons per inferred parameter;
adding more neurons improves performance since the noise can be averaged away.
Figure 5: Inference performance of our neural network (blue) and standard loopy belief propagation
(red) for a variety of graph topologies: chains, single loops, square grids up to 20 × 20 and densely
connected graphs with up to 25 variables. The expectation parameters (means, covariances) of the
pseudomarginals closely match the corresponding parameters for the true marginals.
[Figure 6 panels: (A) neural activity r over time; (B) inferred vs. true parameters (mean, variance) for N_neurons/N_params = 1, N_neurons/N_params = 5, and the no-noise case.]
Figure 6: Network performance is robust to noise, and improves with more neurons. (A) Neural
activity performing inference on a 5 × 5 square grid, in the presence of independent spatiotemporal
Gaussian noise of standard deviation 0.1 times the standard deviation of each signal. (B) Expectation
parameters (means, variances) of the node pseudomarginals closely match the corresponding parameters for the true marginals, despite the noise. Results are shown for one or five neurons per parameter
in the graphical model, and for no noise (i.e. infinitely many neurons).
6 Conclusion
We have shown how a biologically-plausible nonlinear recurrent network of neurons can represent a multivariate probability distribution using population codes, and can perform inference by
reparameterizing the joint distribution to obtain approximate marginal probabilities.
Our network model has desirable properties beyond the lauded features of belief propagation. First,
it allows for a thoroughly distributed population code, with many neurons encoding each variable and
many variables encoded by each neuron. This is consistent with neural recordings in which many
task-relevant features are multiplexed across a neural population [23, 24, 25], as well as with models
where information is embedded in a higher-dimensional state space [26, 27].
Second, the network performs inference in place, without using a distinct neural representation for
messages, and avoids the biological implausibility associated with sending different messages about
every variable to different targets. This virtue comes from exchanging multiple messages for multiple
timescales. It is noteworthy that allowing two timescales prevents overcounting of evidence on loops
of length two (target to source to target). This suggests a novel role of memory in static inference
problems: a longer memory could be used to discount past information sent at more distant times,
thus avoiding the overcounting of evidence that arises from loops of length three and greater. It may
therefore be possible to develop reparameterization algorithms with all the convenient properties of
LBP but with improved performance on loopy graphs.
Previous results show that the quadratic nonlinearity with divisive normalization is convenient and
biologically plausible, but this precise form is not necessary: other pointwise neuronal nonlinearities
can also produce high-quality marginalizations in PPCs [22]. In a distributed code, the precise
nonlinear form at the neuronal scale is not important as long as the effect on the parameters is the
same.
More generally, however, different nonlinear computations on the parameters implement different
approximate inference algorithms. The distinct behaviors of such algorithms as variational inference
[28], generalized belief propagation, and others arise from differences in their nonlinear transformations. Even Gibbs sampling can be described as a noisy nonlinear message-passing algorithm.
Although LBP and its generalizations have strong appeal, we doubt the brain will use this algorithm
exactly. The real nonlinear functions in the brain may implement even smarter algorithms.
To identify the brain's algorithm, it may be more revealing to measure how information is represented
and transformed in a low-dimensional latent space embedded in the high-dimensional neural responses
than to examine each neuronal nonlinearity in isolation. The present work is directed toward this
challenge of understanding computation in this latent space. It provides a concrete example showing
how distributed nonlinear computation can be distinct from localized neural computations. Learning
this computation from data will be a key challenge for neuroscience. In future work we aim to
recover the latent computations of our network from artificial neural recordings generated by the
model. Successful model recovery would encourage us to apply these methods to large-scale neural
recordings to uncover key properties of the brain's distributed nonlinear computations.
Author contributions
XP conceived the study. RR and XP derived the equations. RR implemented the computer simulations.
RR and XP analyzed the results and wrote the paper.
Acknowledgments
XP and RR were supported in part by a grant from the McNair Foundation, NSF CAREER Award
IOS-1552868, and by the Intelligence Advanced Research Projects Activity (IARPA) via Department
of Interior/Interior Business Center (DoI/IBC) contract number D16PC00003.1
1
The U.S. Government is authorized to reproduce and distribute reprints for Governmental purposes notwithstanding any copyright annotation thereon. Disclaimer: The views and conclusions contained herein are those of
the authors and should not be interpreted as necessarily representing the official policies or endorsements, either
expressed or implied, of IARPA, DoI/IBC, or the U.S. Government.
References
[1] Knill DC, Richards W (1996) Perception as Bayesian inference. Cambridge University Press.
[2] Doya K (2007) Bayesian brain: Probabilistic approaches to neural coding. MIT Press.
[3] Pouget A, Beck JM, Ma WJ, Latham PE (2013) Probabilistic brains: knowns and unknowns. Nature neuroscience 16: 1170–1178.
[4] Beck JM, Latham PE, Pouget A (2011) Marginalization in neural circuits with divisive normalization. The Journal of neuroscience 31: 15310–15319.
[5] Ott T, Stoop R (2006) The neurodynamics of belief propagation on binary Markov random fields. In: Advances in neural information processing systems. pp. 1057–1064.
[6] Steimer A, Maass W, Douglas R (2009) Belief propagation in networks of spiking neurons. Neural Computation 21: 2502–2523.
[7] Litvak S, Ullman S (2009) Cortical circuitry implementing graphical models. Neural computation 21: 3010–3056.
[8] George D, Hawkins J (2009) Towards a mathematical theory of cortical micro-circuits. PLoS Comput Biol 5: e1000532.
[9] Grabska-Barwinska A, Beck J, Pouget A, Latham P (2013) Demixing odors - fast inference in olfaction. In: Advances in Neural Information Processing Systems. pp. 1968–1976.
[10] Ma WJ, Beck JM, Latham PE, Pouget A (2006) Bayesian inference with probabilistic population codes. Nature neuroscience 9: 1432–1438.
[11] Pearl J (1988) Probabilistic reasoning in intelligent systems: networks of plausible inference. Morgan Kaufmann.
[12] Yedidia JS, Freeman WT, Weiss Y (2003) Understanding belief propagation and its generalizations. Exploring artificial intelligence in the new millennium 8: 236–239.
[13] Lee TS, Mumford D (2003) Hierarchical Bayesian inference in the visual cortex. JOSA A 20: 1434–1448.
[14] Rao RP (2004) Hierarchical Bayesian inference in networks of spiking neurons. In: Advances in neural information processing systems. pp. 1113–1120.
[15] Wainwright MJ, Jaakkola TS, Willsky AS (2003) Tree-based reparameterization framework for analysis of sum-product and related algorithms. Information Theory, IEEE Transactions on 49: 1120–1146.
[16] Jazayeri M, Movshon JA (2006) Optimal representation of sensory information by neural populations. Nature neuroscience 9: 690–696.
[17] Beck JM, Ma WJ, Kiani R, Hanks T, Churchland AK, Roitman J, Shadlen MN, et al. (2008) Probabilistic population codes for Bayesian decision making. Neuron 60: 1142–1152.
[18] Graf AB, Kohn A, Jazayeri M, Movshon JA (2011) Decoding the activity of neuronal populations in macaque primary visual cortex. Nature neuroscience 14: 239–245.
[19] Heeger DJ (1992) Normalization of cell responses in cat striate cortex. Visual neuroscience 9: 181–197.
[20] Carandini M, Heeger DJ (2012) Normalization as a canonical neural computation. Nature Reviews Neuroscience 13: 51–62.
[21] Rubin DB, Van Hooser SD, Miller KD (2015) The stabilized supralinear network: A unifying circuit motif underlying multi-input integration in sensory cortex. Neuron 85: 402–417.
[22] Vasudeva Raju R, Pitkow X (2015) Marginalization in random nonlinear neural networks. In: COSYNE.
[23] Hayden BY, Platt ML (2010) Neurons in anterior cingulate cortex multiplex information about reward and action. The Journal of Neuroscience 30: 3339–3346.
[24] Rigotti M, Barak O, Warden MR, Wang XJ, Daw ND, Miller EK, Fusi S (2013) The importance of mixed selectivity in complex cognitive tasks. Nature 497: 585–590.
[25] Raposo D, Kaufman MT, Churchland AK (2014) A category-free neural population supports evolving demands during decision-making. Nature neuroscience 17: 1784–1792.
[26] Savin C, Deneve S (2014) Spatio-temporal representations of uncertainty in spiking neural networks. In: Advances in Neural Information Processing Systems. pp. 2024–2032.
[27] Archer E, Park I, Buesing L, Cunningham J, Paninski L (2015) Black box variational inference for state space models. arXiv stat.ML: 1511.07367.
[28] Beck J, Pouget A, Heller KA (2012) Complex inference in neural circuits with probabilistic population codes and topic models. In: Advances in neural information processing systems. pp. 3059–3067.
6,054 | 6,477 | Understanding Probabilistic Sparse
Gaussian Process Approximations
Matthias Bauer??
Mark van der Wilk?
Carl Edward Rasmussen?
?
Department of Engineering, University of Cambridge, Cambridge, UK
?
Max Planck Institute for Intelligent Systems, T?ubingen, Germany
{msb55, mv310, cer54}@cam.ac.uk
Abstract
Good sparse approximations are essential for practical inference in Gaussian
Processes as the computational cost of exact methods is prohibitive for large
datasets. The Fully Independent Training Conditional (FITC) and the Variational
Free Energy (VFE) approximations are two recent popular methods. Despite
superficial similarities, these approximations have surprisingly different theoretical
properties and behave differently in practice. We thoroughly investigate the two
methods for regression both analytically and through illustrative examples, and
draw conclusions to guide practical application.
1
Introduction
Gaussian Processes (GPs) [1] are a flexible class of probabilistic models. Perhaps the most prominent
practical limitation of GPs is that the computational requirement of an exact implementation scales
as O(N 3 ) time, and as O(N 2 ) memory, where N is the number of training cases. Fortunately,
recent progress has been made in developing sparse approximations, which retain the favourable
properties of GPs but at a lower computational cost, typically O(N M 2 ) time and O(N M ) memory
for some chosen M < N . All sparse approximations rely on focussing inference on a small number
of quantities, which represent approximately the entire posterior over functions. These quantities
can be chosen differently, e.g., function values at certain input locations, properties of the spectral
representations [2], or more abstract representations [3]. Similar ideas are used in random feature
expansions [4, 5].
Here we focus on methods that represent the approximate posterior using the function value at a set of
M inducing inputs (also known as pseudo-inputs). These methods include the Deterministic Training
Conditional (DTC) [6] and the Fully Independent Training Conditional (FITC) [7], see [8] for a
review, as well as the Variational Free Energy (VFE) approximation [9]. The methods differ both in
terms of the theoretical approach in deriving the approximation, and in terms of how the inducing
inputs are handled. Broadly speaking, inducing inputs can either be chosen from the training set
(e.g. at random) or be optimised over. In this paper we consider the latter, as this will generally allow
for the best trade-off between accuracy and computational requirements. Training the GP entails
jointly optimizing over inducing inputs and hyperparameters.
In this work, we aim to thoroughly investigate and characterise the difference in behaviour of the FITC
and VFE approximations. We investigate the biases of the bounds when learning hyperparameters,
where each method allocates its modelling capacity, and the optimisation behaviour. In Section 2
we briefly introduce inducing point methods and state the two algorithms using a unifying notation.
In Section 3 we discuss properties of the two approaches, both theoretical and practical. Our aim is
to understand the approximations in detail in order to know under which conditions each method is
likely to succeed or fail in practice. We highlight issues that may arise in practical situations and how
to diagnose and possibly avoid them. Some of the properties of the methods have been previously
reported in the literature; our aim here is a more complete and comparative approach. We draw
conclusions in Section 4.
2 Sparse Gaussian Processes
A Gaussian Process is a flexible distribution over functions, with many useful analytical properties. It
is fully determined by its mean m(x) and covariance k(x, x′) functions. We assume the mean to be
zero, without loss of generality. The covariance function determines properties of the functions, like
smoothness, amplitude, etc. A finite collection of function values at inputs {xi } follows a Gaussian
distribution N (f ; 0, Kff ), where [Kff ]ij = k(xi , xj ).
Here we revisit the GP model for regression [1]. We model the function of interest f(·) using a GP
prior, and noisy observations at the input locations X = {xi }i are observed in the vector y.
p(f) = N(f; 0, K_ff),    p(y|f) = ∏_{n=1}^N N(y_n; f_n, σ_n²)    (1)
Throughout, we employ a squared exponential covariance function k(x, x′) = s_f² exp(−½|x − x′|²/ℓ²), but our results only rely on the decay of covariances with distance. The hyperparameter θ
contains the signal variance s_f², the lengthscale ℓ and the noise variance σ_n², and is suppressed in the
notation.
To make predictions, we follow the common approach of first determining θ* by optimising the
marginal likelihood and then marginalising over the posterior of f:
θ* = argmax_θ p(y|θ),    p(y*|y) = p(y*, y)/p(y) = ∫ p(y*|f*) p(f*|f) p(f|y) df df*    (2)
While the marginal likelihood, the posterior and the predictive distribution all have closed-form
Gaussian expressions, the cost of evaluating them scales as O(N^3) due to the inversion of K_ff + σ_n²I,
which is impractical for many datasets.
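To make the O(N^3) bottleneck concrete, here is a minimal sketch of the exact GP negative log marginal likelihood under the squared exponential kernel; the toy data and all names are illustrative assumptions, not code from the paper.

```python
import numpy as np

def k_se(X, X2, sf2=1.0, ell=1.0):
    """Squared exponential kernel matrix between input sets X and X2."""
    d2 = ((X[:, None, :] - X2[None, :, :]) ** 2).sum(-1)
    return sf2 * np.exp(-0.5 * d2 / ell ** 2)

def gp_nlml(X, y, sf2, ell, sn2):
    """Exact GP negative log marginal likelihood; the Cholesky is O(N^3)."""
    N = len(y)
    K = k_se(X, X, sf2, ell) + sn2 * np.eye(N)
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    return (0.5 * y @ alpha + np.log(np.diag(L)).sum()
            + 0.5 * N * np.log(2 * np.pi))

X = np.linspace(0, 5, 50)[:, None]
y = np.sin(X[:, 0]) + 0.1 * np.random.default_rng(0).standard_normal(50)
print(gp_nlml(X, y, sf2=1.0, ell=1.0, sn2=0.01))
```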
Over the years, the two inducing point methods that have remained most influential are FITC [7]
and VFE [9]. Unlike previously proposed methods (see [6, 10, 8]), both FITC and VFE provide an
approximation to the marginal likelihood which allows both the hyperparameters and inducing inputs
to be learned from the data through gradient based optimisation. Both methods rely on the low rank
matrix Q_ff = K_fu K_uu⁻¹ K_uf instead of the full rank K_ff to reduce the size of any matrix inversion to
M. Note that for most covariance functions, the eigenvalues of K_uu are not bounded away from zero.
Any practical implementation will have to address this to avoid numerical instability. We follow the
common practice of adding a tiny diagonal jitter term εI to K_uu before inverting.
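A minimal sketch of Q_ff and the jitter trick, reusing k_se and X from the previous sketch; the inducing input locations Z are an arbitrary illustrative choice.

```python
import numpy as np

def qff(X, Z, sf2=1.0, ell=1.0, jitter=1e-6):
    """Low-rank matrix Qff = Kfu Kuu^{-1} Kuf, with jitter on Kuu."""
    Kuu = k_se(Z, Z, sf2, ell) + jitter * np.eye(len(Z))
    Kfu = k_se(X, Z, sf2, ell)
    Luu = np.linalg.cholesky(Kuu)        # M x M factor, cheap for M << N
    V = np.linalg.solve(Luu, Kfu.T)      # chosen so that Qff = V^T V
    return V.T @ V

Z = np.linspace(0, 5, 8)[:, None]        # M = 8 inducing inputs (assumption)
Q = qff(X, Z)
# Kff - Qff is positive semi-definite, so its diagonal is non-negative:
print(np.all(np.diag(k_se(X, X)) - np.diag(Q) >= -1e-8))
```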
2.1 Fully Independent Training Conditional (FITC)
Over the years, FITC has been formulated in several different ways. A form of FITC first appeared in
an online learning setting by Csató and Opper [11], derived from the viewpoint of approximating the
full GP posterior. Snelson and Ghahramani [7] introduced FITC as approximate inference in a model
with a modified likelihood and proposed using its marginal likelihood to train the hyperparameters and
inducing inputs jointly. An alternate interpretation where the prior is modified, but exact inference is
performed, was presented in [8], unifying it with other techniques. The latest interesting development
came with the connection that FITC can be obtained by approximating the GP posterior using
Expectation Propagation (EP) [12, 13, 14].
Using the interpretation of modifying the prior to
p(f) = N(f; 0, Q_ff + diag[K_ff − Q_ff])    (3)
we obtain the objective function in Eq. (5). We would like to stress, however, that this modification
gives exactly the same procedure as approximating the full GP posterior with EP. Regardless of the
fact that FITC can be seen as a completely different model, we aim to characterise it as an
approximation to the full GP.
2.2 Variational Free Energy (VFE)
Variational inference can also be used to approximate the true posterior. We follow the derivation
by Titsias [9] and bound the marginal likelihood, by instantiating extra function values on the latent
Gaussian process u at locations Z,1 followed by lower bounding the marginal likelihood. To ensure
efficient calculation, q(u, f) is chosen to factorise as q(u)p(f|u). This removes terms with K_ff⁻¹:
log p(y) ≥ ∫ q(u, f) log [ p(y|f) p(f|u) p(u) / (p(f|u) q(u)) ] du df    (4)
The optimal q(u) can be found by variational calculus resulting in the lower bound in Eq. (5).
2.3 Common notation
The objective functions for both VFE and FITC look very similar. In the following discussion we
will refer to a common notation of their negative log marginal likelihood (NLML) F, which will be
minimised to train the methods:
F = (N/2) log(2π) + (1/2) log|Q_ff + G| + (1/2) yᵀ(Q_ff + G)⁻¹y + (1/(2σ_n²)) tr(T),    (5)
where the second, third, and fourth terms are the complexity penalty, the data fit term, and the trace term, respectively, and
G_FITC = diag[K_ff − Q_ff] + σ_n²I,    T_FITC = 0    (6)
G_VFE = σ_n²I,    T_VFE = K_ff − Q_ff.    (7)
The common objective function has three terms, of which the data fit and complexity penalty have
direct analogues to the full GP. The data fit term penalises the data lying outside the covariance ellipse
Qff + G. The complexity penalty is the integral of the data fit term over all possible observations
y. It characterises the volume of possible datasets that are compatible with the data fit term. This
can be seen as the mechanism of Occam's razor [16], by penalising the methods for being able to
predict too many datasets. The trace term in VFE ensures that the objective function is a true lower
bound to the marginal likelihood of the full GP. Without this term, VFE is identical to the earlier DTC
approximation [6] which can grossly over-estimate the marginal likelihood. The trace term penalises
the sum of the conditional variances at the training inputs, conditioned on the inducing inputs [17].
Intuitively, it ensures that VFE not only models this specific dataset y well, but also approximates the
covariance structure of the full GP Kff .
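The common objective in Eqs. (5)–(7) can be written down directly. The following sketch (reusing k_se, qff, X, y and Z from the sketches above) evaluates F naively via an N × N Cholesky for clarity; a practical implementation would instead exploit the low rank of Q_ff (e.g. via the Woodbury identity) for O(NM^2) cost.

```python
import numpy as np

def objective(X, y, Z, sf2, ell, sn2, method="VFE"):
    """Negative log marginal likelihood F of Eq. (5) for FITC or VFE."""
    N = len(y)
    Kff_diag = sf2 * np.ones(N)            # k(x, x) for the SE kernel
    Q = qff(X, Z, sf2, ell)
    if method == "FITC":                   # Eq. (6)
        G = np.diag(Kff_diag - np.diag(Q)) + sn2 * np.eye(N)
        trace_term = 0.0
    else:                                  # VFE, Eq. (7)
        G = sn2 * np.eye(N)
        trace_term = (Kff_diag - np.diag(Q)).sum() / (2 * sn2)
    L = np.linalg.cholesky(Q + G)          # naive O(N^3), for clarity only
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    return (0.5 * N * np.log(2 * np.pi) + np.log(np.diag(L)).sum()
            + 0.5 * y @ alpha + trace_term)

print(objective(X, y, Z, 1.0, 1.0, 0.01, "FITC"),
      objective(X, y, Z, 1.0, 1.0, 0.01, "VFE"))
```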
3 Comparative behaviour
As our main test case we use the one dimensional dataset2 considered in [7, 9] with 200 input-output
pairs. Of course, sparse methods are not necessary for this toy problem, but all of the issues we raise
are illustrated nicely in this one dimensional task which can easily be plotted. In Sections 3.1 to 3.3
we illustrate issues relating to the objective functions. These properties are independent of how the
method is optimised. However, whether they are encountered in practice can depend on optimiser
dynamics, which we discuss in Sections 3.4 and 3.5.
3.1 FITC can severely underestimate the noise variance, VFE overestimates it
In the full GP with Gaussian likelihood we assume a homoscedastic (input independent) noise model
with noise variance parameter σ_n². It fully characterises the uncertainty left after completely learning
the latent function. In this section we show how FITC can also use the diagonal term diag(K_ff − Q_ff)
in G_FITC as heteroscedastic (input dependent) noise [7] to account for these differences, thus,
invalidating the above interpretation of the noise variance parameter. In fact, the FITC objective
function encourages underestimation of the noise variance, whereas the VFE bound encourages
overestimation. The latter is in line with previously reported biases of variational methods [18].
Fig. 1 shows the configuration most preferred by the FITC objective for a subset of 100 data points
of the Snelson dataset, found by an exhaustive manual search for a minimum over hyperparameters,
inducing inputs and number of inducing points. The noise variance is shrunk to practically zero,
despite the mean prediction not going through every data point. Note how the mean still behaves well
and how the training data lie well within the predictive variance. Only when considering predictive
probabilities will this behaviour cause diminished performance. VFE, on the other hand, is able to
approximate the posterior predictive distribution almost exactly.
¹ Matthews et al. [15] show that this procedure approximates the posterior over the entire process f correctly.
² Obtained from http://www.gatsby.ucl.ac.uk/~snelson/
FITC (nlml = 23.16, σ_n = 1.93 × 10⁻⁴)    VFE (nlml = 38.86, σ_n = 0.286)
Figure 1: Behaviour of FITC and VFE on a subset of 100 data points of the Snelson dataset for 8
inducing inputs (red crosses indicate inducing inputs; red lines indicate mean and 2σ) compared to
the prediction of the full GP in grey. Optimised values for the full GP: nlml = 34.15, σ_n = 0.274
For both approximations, the complexity penalty decreases with decreased noise variance, by reducing
the volume of datasets that can be explained. For a full GP and VFE this is accompanied by a data
fit penalty for data points lying far away from the predictive mean. FITC, on the other hand, has an
additional mechanism to avoid this penalty: its diagonal correction term diag(K_ff − Q_ff). This term
can be seen as an input dependent or heteroscedastic noise term (discussed as a modelling advantage
by Snelson and Ghahramani [7]), which is zero exactly at an inducing input, and which grows to the
prior variance away from an inducing input. By placing the inducing inputs near training data that
happen to lie near the mean, the heteroscedastic noise term is locally shrunk, resulting in a reduced
complexity penalty. Data points both far from the mean and far from inducing inputs do not incur a
data fit penalty, as the heteroscedastic noise term has increased around these points. This mechanism
removes the need for the homoscedastic noise to explain deviations from the mean, such that σ_n² can
be turned down to reduce the complexity penalty further.
This explains the extreme pinching (severely reduced noise variance) observed in Fig. 1, also see,
e.g., [9, Fig. 2]. In examples with more densely packed data, there may not be any places where a
near-zero noise point can be placed without incurring a huge data-fit penalty. However, inducing
inputs will be placed in places where the data happens to randomly cluster around the mean, which
still results in a decreased noise estimate, albeit less extreme, see Figs. 2 and 3 where we use all 200
data points.
Remark 1 FITC has an alternative mechanism to explain deviations from the learned function than
the likelihood noise and will underestimate σ_n² as a consequence. In extreme cases, σ_n² can incorrectly
be estimated to be almost zero.
As a consequence of this additional mechanism, σ_n² can no longer be interpreted in the same way
as for VFE or the full GP. σ_n² is often interpreted as the amount of uncertainty in the dataset which
cannot be explained. Based on this interpretation, a low σ_n² is often used as an indication that the
dataset is being fitted well. Active learning applications rely on a similar interpretation to differentiate
between inherent noise, and uncertainty in the latent GP which can be reduced. FITC's different
interpretation of σ_n² will cause efforts like these to fail.
VFE, on the other hand, is biased towards over-estimating the noise variance, because of both the data
fit and the trace term. Q_ff + σ_n²I has N − M eigenvectors with an eigenvalue of σ_n², since the rank of
Q_ff is M. Any component of y in these directions will result in a larger data fit penalty than for K_ff,
which can only be reduced by increasing σ_n². The trace term can also be reduced by increasing σ_n².
Remark 2 The VFE objective tends to over-estimate the noise variance compared to the full GP.
3.2 VFE improves with additional inducing inputs, FITC may ignore them
Here we investigate the behaviour of each method when more inducing inputs are added. For both
methods, adding an extra inducing input gives it an extra basis function to model the data with. We
discuss how and why VFE always improves, while FITC may deteriorate.
[Figure 2: FITC (left) and VFE (right) panels; the bottom axes show the change in objective ΔF, from −10 to 10, against the location of the added inducing input.]
Figure 2: Top: Fits for FITC and VFE on 200 data points of the Snelson dataset for M = 7 optimised
inducing inputs (black). Bottom: Change in objective function from adding an inducing input
anywhere along the x-axis (no further hyperparameter optimisation performed). The overall change is
decomposed into the change in the individual terms (see legend). Two particular additional inducing
inputs and their effect on the predictive distribution shown in red and blue.
Fig. 2 shows an example of how the objective function changes when an inducing input is added
anywhere in the input domain. While the change in objective function looks reasonably smooth
overall, there are pronounced spikes for both, FITC and VFE. These return the objective to the value
without the additional inducing input and occur at the locations of existing inducing inputs. We
discuss the general change first before explaining the spikes.
Mathematically, adding an inducing input corresponds to a rank 1 update of Qff , and can be shown to
always improve VFE's bound³, see Supplement for a proof. VFE's complexity penalty increases due
to an extra non-zero eigenvalue in Qff , but gains in data fit and trace.
Remark 3 VFE's posterior and marginal likelihood approximation become more accurate (or remain
unchanged) regardless of where a new inducing input is placed.
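Remark 3 is easy to probe numerically with the sketch above (reusing objective, X, y and Z): up to jitter-level numerical noise, the VFE objective should never increase when an inducing input is added anywhere.

```python
import numpy as np

base = objective(X, y, Z, 1.0, 1.0, 0.01, "VFE")
deltas = [objective(X, y, np.vstack([Z, [[z]]]), 1.0, 1.0, 0.01, "VFE") - base
          for z in np.linspace(0.1, 4.9, 20)]
# expected <= 0; tiny positive values can only come from the jitter on Kuu
print(max(deltas))
```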
For FITC, the objective can change either way. Regardless of the change in objective, the heteroscedastic noise is decreased at all points (see Supplement for proof). For a squared exponential kernel,
the decrease is strongest around the newly placed inducing input. This decrease has two effects. One,
it reduces the complexity penalty since the diagonal component of Qff + G is reduced and replaced
by a more strongly correlated Qff . Two, it worsens the data fit term as the heteroscedastic term is
required to fit the data when the homoscedastic noise is underestimated. Fig. 2 shows reduced error
bars with several data points now outside of the 95% prediction bars. Also shown is a case where an
additional inducing input improves the objective, where the extra correlations outweigh the reduced
heteroscedastic noise.
Both VFE and FITC exhibit pathological behaviour (spikes) when inducing inputs are clumped, that
is, when they are placed exactly on top of each other. In this case, the objective function has the
same value as when all duplicate inducing inputs were removed, see Supplement for a proof. In other
words, for all practical purposes, a model with duplicate inducing inputs reduces to a model with
fewer, individually placed inducing inputs.
Theoretically, these pathologies only occur at single points, such that no gradients towards or away
from them could exist and they would never be encountered. In practice, however, these peaks
are widened by a finite jitter that is added to K_uu to ensure it remains well conditioned enough
to be invertible. This finite width provides the gradients that allow an optimiser to detect these
configurations.
As VFE always improves with additional inducing inputs, these configurations must correspond to
maxima of the optimisation surface and clumping of inducing inputs does not occur for VFE. For
FITC, configurations with clumped inducing inputs can and often do correspond to minima of the
optimisation surface. By placing them on top of each other, FITC can avoid the penalty of adding
an extra inducing input and can gain the bonus from the heteroscedastic noise. Clumping, thus,
constitutes a mechanism that allows FITC to effectively remove inducing inputs at no cost.
³ Matthews [19] independently proved this result by considering the KL divergence between processes. Titsias [9] proved this result for the special case when the new inducing input is selected from the training data.
We illustrate this behaviour in Fig. 3 for 15 randomly initialised inducing inputs. FITC places some
of them exactly on top of each other, whereas VFE spreads them out and recovers the full GP well.
Figure 3: Fits for 15 inducing inputs for FITC and VFE (initial as black crosses, optimised red
crosses). Even following joint optimisation of inducing inputs and hyperparameters, FITC avoids the
penalty of added inducing inputs by clumping some of them on top of each other (shown as a single
red cross). VFE spreads out the inducing inputs to get closer to the true full GP posterior.
Remark 4 In FITC, having a good approximation Qff to Kff needs to be traded off with the gains
coming from the heteroscedastic noise. FITC does not always favour a more accurate approximation
to the GP.
Remark 5 FITC avoids losing the gains of the heteroscedastic noise by placing inducing inputs on
top of each other, effectively removing them.
3.3 FITC does not recover the full GP posterior, VFE does
In the previous section we showed that FITC may not utilise additional resources to model the data.
The clumping behaviour, thus, explains why the FITC objective may not recover the full GP, even
when given enough resources.
Method  | nlml initial | nlml optimised
Full GP | —            | 33.8923
VFE     | 33.8923      | 33.8923
FITC    | 33.8923      | 28.3869
Both VFE and FITC can recover the true posterior by placing an inducing input on every training
input [9, 12]. For VFE, this is a global minimum, since the KL gap to the true marginal likelihood is
zero. For FITC, however, this configuration is not stable and the objective can still be improved by
clumping of inducing inputs, as Matthews [19] has shown empirically by aggressive optimisation.
The derivative of the inducing inputs is zero for the initial configuration, but adding jitter subtly
makes this behaviour more obvious by perturbing the gradients, similar to the widening of the peaks
in Fig. 2. In Fig. 4 we reproduce the observations in [19, Sec 4.6.1 and Fig. 4.2] on a subset of 100
data points of the Snelson dataset: VFE remains at the minimum and, thus, recovers the full GP,
whereas FITC improves its objective and clumps the inducing inputs considerably.
Figure 4: Results of optimising VFE and FITC after initialising at the solution that gives the correct
posterior and marginal likelihood as in [19, Sec 4.6.1]: FITC moves to a significantly different
solution with better objective value (Table, left) and clumped inducing inputs (Figure, right).
Remark 6 FITC generally does not recover the full GP, even when it has enough resources.
3.4 FITC relies on local optima
So far, we have observed some cases where FITC fails to produce results in line with the full GP, and
characterised why. However, in practice, FITC has performed well, and pathological behaviour is not
always observed. In this section we discuss the optimiser dynamics and show that they help FITC
behave reasonably.
To demonstrate this behaviour, we consider a 4d toy dataset: 1024 training and 1024 test samples
drawn from a 4d Gaussian Process with isotropic squared exponential covariance function (ℓ =
1.5, s_f = 1) and true noise variance σ_n² = 0.01. The data inputs were drawn from a Gaussian centred
around the origin, but similar results were obtained for uniformly sampled inputs. We fit both FITC
and VFE to this dataset with the number of inducing inputs ranging from 16 to 1024, and compare a
representative run to the full GP in Fig. 5.
[Figure 5 panels: NLML (×10²), optimised σ_n (log scale), negative log predictive probability, and SMSE, each plotted against the number of inducing inputs (2⁴ to 2¹⁰) for the full GP, FITC, and VFE.]
Figure 5: Optimisation behaviour of VFE and FITC for varying number of inducing inputs compared
to the full GP. We show the objective function (negative log marginal likelihood), the optimised noise
σ_n, the negative log predictive probability and standardised mean squared error as defined in [1].
VFE monotonically approaches the values of the full GP but initially overestimates the noise variance,
as discussed in Section 3.1. Conversely, we can identify three regimes for the objective function of
FITC: 1) Monotonic improvement for few inducing inputs, 2) a region where FITC over-estimates
the marginal likelihood, and 3) recovery towards the full GP for many inducing inputs. Predictive
performance follows a similar trend, first improving, then declining while the bound is estimated to
be too high, followed by a recovery. The recovery is counter to the usual intuition that over-fitting
worsens when adding more parameters.
We explain the behaviour in these three regimes as follows: When the number of inducing inputs
is severely limited (regime 1), FITC needs to place them such that K_ff is well approximated. This
correlates most points to some degree, and ensures a reasonable data fit term. The marginal likelihood
is under-estimated due to lack of flexibility in Q_ff. This behaviour is consistent with the intuition
that limiting model capacity prevents overfitting.
As the number of inducing inputs increases (regime 2), the marginal likelihood is over-estimated and
the noise drastically under-estimated. Additionally, performance in terms of log predictive probability
deteriorates. This is the regime closest to FITC's behaviour in Fig. 1. There are enough inducing
inputs such that they can be placed such that a bonus can be gained from the heteroscedastic noise,
without gaining a complexity penalty from losing long scale correlations.
Finally, in regime 3, FITC starts to behave more like a regular GP in terms of marginal likelihood,
predictive performance and noise variance parameter σ_n. FITC's ability to use heteroscedastic noise
is reduced as the approximate covariance matrix Qff is closer to the true covariance matrix Kff when
many (initial) inducing input are spread over the input space.
In the previous section we showed that after adding a new inducing input, a better minimum obtained
without the extra inducing input could be recovered by clumping. So it is clear that the minimum that
was found with fewer active inducing inputs still exists in the optimisation surface of many inducing
inputs; the optimiser just does not find it.
Remark 7 When running FITC with many inducing inputs its resemblance to the full GP solution
relies on local optima, rather than the objective function changing.
3.5 VFE is hindered by local optima
So far we have seen that the VFE objective function is a true lower bound on the marginal likelihood
and does not share the same pathologies as FITC. Thus, when optimising, we really are interested in
finding a global optimum. The VFE objective function is not completely trivial to optimise, and often
tricks, such as initialising the inducing inputs with k-means and initially fixing the hyperparameters
[20, 21], are required to find a good optimum. Others have commented that VFE has the tendency to
underfit [3]. Here we investigate the underfitting claim and relate it to optimisation behaviour.
As this behaviour is not observable in our 1D dataset, we illustrate it on the pumadyn32nm dataset4
(32 dimensions, 7168 training, 1024 test), see Table 1 for the results of a representative run with
random initial conditions and M = 40 inducing inputs.
Method          | NLML/N | σ_n   | inv. lengthscales | RMSE
GP (SoD)        | −0.099 | 0.196 | (plot)            | 0.209
FITC            | −0.145 | 0.004 | (plot)            | 0.212
VFE             | 1.419  | 1     | (plot)            | 0.979
VFE (frozen)    | 0.151  | 0.278 | (plot)            | 0.276
VFE (init FITC) | −0.096 | 0.213 | (plot)            | 0.212
Table 1: Results for pumadyn32nm dataset. We show negative log marginal likelihood (NLML)
divided by number of training points, the optimised noise variance σ_n², the ten most dominant inverse
lengthscales and the RMSE on test data. Methods are full GP on 2048 training samples, FITC, VFE,
VFE with initially frozen hyperparameters, VFE initialised with the solution obtained by FITC.
Using a squared exponential ARD kernel with separate lengthscales for every dimension, a full GP
on a subset of data identified four lengthscales as important to model the data while scaling the other
28 lengthscales to large values (in Table 1 we plot the inverse lengthscales).
FITC was consistently able to identify the same four lengthscales and performed similarly compared
to the full GP but scaled down the noise variance σ_n² to almost zero. The latter is consistent with our
earlier observations of strong pinching in a regime with low-density data as is the case here due to
the high dimensionality. VFE, on the other hand, was unable to identify these relevant lengthscales
when jointly optimising the hyperparameters and inducing inputs, and only identified some of
them when initially freezing the hyperparameters. One might say that VFE “underfits” in this case.
However, we can show that VFE still recognises a good solution: When we initialised VFE with the
FITC solution it consistently obtained a good fit to the model with correctly identified lengthscales
and a noise variance that was close to the full GP.
Remark 8 VFE has a tendency to find under-fitting solutions. However, this is an optimisation issue.
The bound correctly identifies good solutions.
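To make the objective comparison concrete, both quantities reported in Table 1 can be computed directly. The following is a minimal numpy sketch, assuming a squared-exponential kernel, the FITC marginal likelihood with covariance Qff + diag(Kff − Qff) + σn²I, and the VFE objective of Titsias [9, 17] (the NLML of Qff + σn²I plus a trace penalty); all names are ours and not tied to any particular GP library.

    import numpy as np

    def se_kernel(A, B, lengthscale=1.0, variance=1.0):
        # Squared-exponential kernel matrix between the rows of A and B.
        d2 = np.sum(A**2, 1)[:, None] + np.sum(B**2, 1)[None, :] - 2 * A @ B.T
        return variance * np.exp(-0.5 * d2 / lengthscale**2)

    def nlml(y, K):
        # Negative log marginal likelihood of y ~ N(0, K).
        L = np.linalg.cholesky(K)
        a = np.linalg.solve(L.T, np.linalg.solve(L, y))
        return 0.5 * y @ a + np.log(np.diag(L)).sum() + 0.5 * len(y) * np.log(2 * np.pi)

    def fitc_and_vfe(X, y, Z, sn2, **kern):
        # Z: inducing inputs; sn2: noise variance sigma_n^2.
        Kff_diag = np.diag(se_kernel(X, X, **kern))       # only the diagonal is used
        Kuf = se_kernel(Z, X, **kern)
        Kuu = se_kernel(Z, Z, **kern) + 1e-6 * np.eye(len(Z))  # jitter
        Qff = Kuf.T @ np.linalg.solve(Kuu, Kuf)
        r = Kff_diag - np.diag(Qff)                       # diag(Kff - Qff) >= 0
        I = np.eye(len(y))
        fitc = nlml(y, Qff + np.diag(r) + sn2 * I)        # FITC objective
        vfe = nlml(y, Qff + sn2 * I) + r.sum() / (2 * sn2)  # VFE objective (negative bound)
        return fitc, vfe

Initialising VFE with the FITC solution, as in the last row of Table 1, then simply amounts to starting the optimiser for the vfe value at the inducing inputs and hyperparameters that minimise the fitc value.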
4 Conclusion
In this work, we have thoroughly investigated and characterised the differences between FITC
and VFE, both in terms of their objective function and their behaviour observed during practical
optimisation. We highlight several instances of undesirable behaviour in the FITC objective: overestimation of the marginal likelihood, sometimes severe under-estimation of the noise variance
parameter, wasting of modelling resources and not recovering the true posterior. The common
practice of using the noise variance parameter as a diagnostic for good model fitting is unreliable.
In contrast, VFE is a true bound to the marginal likelihood of the full GP and behaves predictably:
It correctly identifies good solutions, always improves with extra resources and recovers the true
posterior when possible. In practice however, the pathologies of the FITC objective do not always
show up, thanks to "good" local optima and (unintentional) early stopping. While VFE's objective
recognises a good configuration, it is often more susceptible to local optima and harder to optimise
than FITC.
Which of these pathologies show up in practice depends on the dataset in question. However, based
on the superior properties of the VFE objective function, we recommend using VFE, while paying
attention to optimisation difficulties. These can be mitigated by careful initialisation, random restarts,
other optimisation tricks and comparison to the FITC solution to guide VFE optimisation.
Acknowledgements
We would like to thank Alexander Matthews, Thang Bui, and Richard Turner for useful discussions.
4: obtained from http://www.cs.toronto.edu/~delve/data/datasets.html
References
[1] C. E. Rasmussen and C. K. I. Williams. Gaussian Processes for Machine Learning (Adaptive Computation and Machine Learning series). The MIT Press, 2005.
[2] M. Lázaro-Gredilla, J. Quiñonero-Candela, C. E. Rasmussen and A. R. Figueiras-Vidal. "Sparse spectrum Gaussian process regression". In: The Journal of Machine Learning Research 11 (2010).
[3] M. Lázaro-Gredilla and A. Figueiras-Vidal. "Inter-domain Gaussian processes for sparse inference using inducing features". In: Advances in Neural Information Processing Systems. 2009.
[4] A. Rahimi and B. Recht. "Weighted sums of random kitchen sinks: Replacing minimization with randomization in learning". In: Advances in Neural Information Processing Systems. 2009.
[5] Z. Yang, A. J. Smola, L. Song and A. G. Wilson. "A la Carte - Learning Fast Kernels". In: Artificial Intelligence and Statistics. 2015. eprint: 1412.6493.
[6] M. Seeger, C. K. I. Williams and N. D. Lawrence. "Fast Forward Selection to Speed Up Sparse Gaussian Process Regression". In: Proceedings of the Ninth International Workshop on Artificial Intelligence and Statistics. 2003.
[7] E. Snelson and Z. Ghahramani. "Sparse Gaussian Processes using Pseudo-inputs". In: Neural Information Processing Systems. Vol. 18. 2006.
[8] J. Quiñonero-Candela and C. E. Rasmussen. "A unifying view of sparse approximate Gaussian process regression". In: The Journal of Machine Learning Research 6 (2005).
[9] M. K. Titsias. "Variational learning of inducing variables in sparse Gaussian processes". In: Proceedings of the Twelfth International Conference on Artificial Intelligence and Statistics. 2009.
[10] A. J. Smola and P. Bartlett. "Sparse greedy Gaussian process regression". In: Advances in Neural Information Processing Systems 13. 2001.
[11] L. Csató and M. Opper. "Sparse on-line Gaussian processes". In: Neural Computation 14.3 (2002).
[12] E. Snelson. "Flexible and efficient Gaussian process models for machine learning". PhD thesis. University College London, 2007.
[13] Y. Qi, A. H. Abdel-Gawad and T. P. Minka. "Sparse-posterior Gaussian Processes for general likelihoods". In: Proceedings of the Twenty-Sixth Conference on Uncertainty in Artificial Intelligence. 2010.
[14] T. D. Bui, J. Yan and R. E. Turner. "A Unifying Framework for Sparse Gaussian Process Approximation using Power Expectation Propagation". In: (2016). eprint: 1605.07066.
[15] A. Matthews, J. Hensman, R. E. Turner and Z. Ghahramani. "On Sparse variational methods and the Kullback-Leibler divergence between stochastic processes". In: Proceedings of the Nineteenth International Conference on Artificial Intelligence and Statistics. 2016. eprint: 1504.07027.
[16] C. E. Rasmussen and Z. Ghahramani. "Occam's Razor". In: Advances in Neural Information Processing Systems 13. 2001.
[17] M. K. Titsias. Variational Model Selection for Sparse Gaussian Process Regression. Tech. rep. University of Manchester, 2009.
[18] R. E. Turner and M. Sahani. "Two problems with variational expectation maximisation for time-series models". In: Bayesian Time Series Models. Cambridge University Press, 2011. Chap. 5.
[19] A. Matthews. "Scalable Gaussian process inference using variational methods". PhD thesis. University of Cambridge, 2016.
[20] J. Hensman, A. Matthews and Z. Ghahramani. "Scalable Variational Gaussian Process Classification". In: Proceedings of the Eighteenth International Conference on Artificial Intelligence and Statistics. 2015. eprint: 1411.2005.
[21] J. Hensman, N. Fusi and N. D. Lawrence. "Gaussian Processes for Big Data". In: Conference on Uncertainty in Artificial Intelligence. 2013. eprint: 1309.6835.
6,055 | 6,478 | Fast and Provably Good Seedings for k-Means
Olivier Bachem
Department of Computer Science
ETH Zurich
olivier.bachem@inf.ethz.ch
Mario Lucic
Department of Computer Science
ETH Zurich
lucic@inf.ethz.ch
S. Hamed Hassani
Department of Computer Science
ETH Zurich
hamed@inf.ethz.ch
Andreas Krause
Department of Computer Science
ETH Zurich
krausea@ethz.ch
Abstract
Seeding, the task of finding initial cluster centers, is critical in obtaining high-quality clusterings for k-Means. However, k-means++ seeding, the state-of-the-art
algorithm, does not scale well to massive datasets as it is inherently sequential
and requires k full passes through the data. It was recently shown that Markov
chain Monte Carlo sampling can be used to efficiently approximate the seeding
step of k-means++. However, this result requires assumptions on the data generating distribution. We propose a simple yet fast seeding algorithm that produces
provably good clusterings even without assumptions on the data. Our analysis
shows that the algorithm allows for a favourable trade-off between solution quality
and computational cost, speeding up k-means++ seeding by up to several orders
of magnitude. We validate our theoretical results in extensive experiments on a
variety of real-world data sets.
1 Introduction
k-means++ (Arthur & Vassilvitskii, 2007) is one of the most widely used methods to solve k-Means
clustering. The algorithm is simple and consists of two steps: In the seeding step, initial cluster
centers are found using an adaptive sampling scheme called D2 -sampling. In the second step, this
solution is refined using Lloyd's algorithm (Lloyd, 1982), the classic iterative algorithm for k-Means.
The key advantages of k-means++ are its strong empirical performance, theoretical guarantees on
the solution quality, and ease of use. Arthur & Vassilvitskii (2007) show that k-means++ produces
clusterings that are in expectation O(log k)-competitive with the optimal solution without any
assumptions on the data. Furthermore, this theoretical guarantee already holds after the seeding
step. The subsequent use of Lloyd's algorithm to refine the solution only guarantees that the solution
quality does not deteriorate and that it converges to a locally optimal solution in finite time. In
contrast, using naive seeding such as selecting data points uniformly at random followed by Lloyd's
algorithm can produce solutions that are arbitrarily bad compared to the optimal solution.
The drawback of k-means++ is that it does not scale easily to massive data sets since both its
seeding step and every iteration of Lloyd's algorithm require the computation of all pairwise distances
between cluster centers and data points. Lloyd's algorithm can be parallelized in the MapReduce
framework (Zhao et al., 2009) or even replaced by fast stochastic optimization techniques such as
online or mini-batch k-Means (Bottou & Bengio, 1994; Sculley, 2010). However, the seeding step
requires k inherently sequential passes through the data, making it impractical even for moderate k.
This highlights the need for a fast and scalable seeding algorithm. Ideally, it should also retain the
theoretical guarantees of k-means++ and provide equally competitive clusterings in practice. Such
an approach was presented by Bachem et al. (2016) who propose to approximate k-means++ using a
Markov chain Monte Carlo (MCMC) approach and provide a fast seeding algorithm. Under natural
assumptions on the data generating distribution, the authors show that the computational complexity
of k-means++ can be greatly decreased while retaining the same O(log k) guarantee on the solution
quality. The drawback of this approach is that these assumptions may not hold and that checking
their validity is expensive (see detailed discussion in Section 3).
Our contributions. The goal of this paper is to provide fast and competitive seedings for k-Means
clustering without prior assumptions on the data. As our key contributions, we
(1) propose a simple yet fast seeding algorithm for k-Means,
(2) show that it produces provably good clusterings without assumptions on the data,
(3) provide stronger theoretical guarantees under assumptions on the data generating distribution,
(4) extend the algorithm to arbitrary distance metrics and various divergence measures,
(5) compare the algorithm to previous results, both theoretically and empirically, and
(6) demonstrate its effectiveness on several real-world data sets.
2 Background and related work
We will start by formalizing the problem and reviewing several recent results. Let X denote a set of
n points in R^d. For any finite set C ⊂ R^d and x ∈ X, we define

    d(x, C)² = min_{c∈C} ‖x − c‖₂².

The objective of k-Means clustering is to find a set C of k cluster centers in R^d such that the
quantization error φ_C(X) is minimized, where

    φ_C(X) = Σ_{x∈X} d(x, C)².

We denote the optimal quantization error with k centers by φ_k^OPT(X), the mean of X by µ(X), and
the variance of X by Var(X) = Σ_{x∈X} d(x, µ(X))². We note that φ_1^OPT(X) = Var(X).
D²-sampling. Given a set of centers C, the D²-sampling strategy, as the name suggests, is to sample
each point x ∈ X with probability proportional to the squared distance to the selected centers,

    p(x | C) = d(x, C)² / Σ_{x′∈X} d(x′, C)².    (1)

The seeding step of k-means++ builds upon D²-sampling: It first samples an initial center uniformly
at random. Then, k − 1 additional centers are sequentially added to the previously sampled centers
using D²-sampling. The resulting computational complexity is Θ(nkd), as for each x ∈ X the
distance d(x, C)² in (1) needs to be updated whenever a center is added to C.
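As an illustration of the two steps just described, a direct numpy sketch of k-means++ seeding (our code, not the authors' reference implementation) makes the Θ(nkd) cost visible: every new center triggers a full pass over the data to update d(x, C)².

    import numpy as np

    def kmeanspp_seeding(X, k, rng=np.random.default_rng(0)):
        # Exact D^2-sampling as in Eq. (1); Theta(nkd) overall.
        n = X.shape[0]
        centers = [X[rng.integers(n)]]                 # first center: uniform
        d2 = np.sum((X - centers[0])**2, axis=1)       # d(x, C)^2 for all x
        for _ in range(k - 1):
            c = X[rng.choice(n, p=d2 / d2.sum())]      # D^2-sample the next center
            centers.append(c)
            d2 = np.minimum(d2, np.sum((X - c)**2, axis=1))  # full pass per center
        return np.stack(centers)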
Metropolis-Hastings. The Metropolis-Hastings algorithm (Hastings, 1970) is an MCMC method for
sampling from a probability distribution p(x) whose density is known only up to constants. Consider
the following variant that uses an independent proposal distribution q(x) to build a Markov chain:
Start with an arbitrary initial state x₁ and in each iteration j ∈ [2, . . . , m] sample a candidate y_j using
q(x). Then, either accept this candidate (i.e., x_j = y_j) with probability

    π(x_{j−1}, y_j) = min( p(y_j) q(x_{j−1}) / ( p(x_{j−1}) q(y_j) ), 1 )    (2)

or reject it otherwise (i.e., x_j = x_{j−1}). The stationary distribution of this Markov chain is p(x).
Hence, for m sufficiently large, the distribution of x_m is approximately p(x).
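In code, this variant takes a handful of lines (a sketch with our naming; p_tilde is the target density up to its normalising constant):

    import numpy as np

    def mh_independent(p_tilde, q_sample, q_pdf, m, rng=np.random.default_rng(0)):
        # Markov chain of length m with an independent proposal q.
        x = q_sample(rng)                      # arbitrary initial state
        for _ in range(m - 1):
            y = q_sample(rng)                  # candidate, drawn independently of x
            accept = min(1.0, p_tilde(y) * q_pdf(x) / (p_tilde(x) * q_pdf(y)))  # Eq. (2)
            if rng.random() < accept:
                x = y
        return x                               # approximately distributed as p for large m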
Approximation using MCMC (K-MC²). Bachem et al. (2016) propose to speed up k-means++ by
replacing the exact D²-sampling in (1) with a fast approximation based on MCMC sampling. In each
iteration j ∈ [2, 3, . . . , k], one constructs a Markov chain of length m using the Metropolis-Hastings
algorithm with an independent and uniform proposal distribution q(x) = 1/n. The key advantage is
that the acceptance probability in (2) only depends on d(y_j, C)² and d(x_{j−1}, C)² since

    min( p(y_j) q(x_{j−1}) / ( p(x_{j−1}) q(y_j) ), 1 ) = min( d(y_j, C)² / d(x_{j−1}, C)², 1 ).

Critically, in each of the k − 1 iterations, the algorithm does not require a full pass through the data,
but only needs to compute the distances between m points and up to k − 1 centers. As a consequence,
the complexity of K-MC² is O(mk²d) compared to O(nkd) for k-means++ seeding.
To bound the quality of the solutions produced by K-MC², Bachem et al. (2016) analyze the mixing
time of the described Markov chains. To this end, the authors define the two data-dependent quantities:

    α(X) = max_{x∈X} d(x, µ(X))² / Σ_{x′∈X} d(x′, µ(X))²,  and  β(X) = φ_1^OPT(X) / φ_k^OPT(X).    (3)

In order to bound each term, the authors assume that the data is generated i.i.d. from a distribution F
and impose two conditions on F. First, they assume that F exhibits exponential tails and prove that
in this case α(X) ∈ O(log² n) with high probability. Second, they assume that "F is approximately
uniform on a hypersphere". This in turn implies that β(X) ∈ O(k) with high probability. Under
these assumptions, the authors prove that the solution generated by K-MC² is in expectation O(log k)-competitive with the optimal solution if m ∈ Ω(k log² n log k). In this case, the total computational
complexity of K-MC² is O(k³ d log² n log k), which is sublinear in the number of data points.
Other related work. A survey on seeding methods for k-Means was provided by Celebi et al.
(2013). D²-sampling and k-means++ have been extensively studied in the literature. Previous work
was primarily focused on related algorithms (Arthur & Vassilvitskii, 2007; Ostrovsky et al., 2006;
Jaiswal et al., 2014, 2015), its theoretical properties (Ailon et al., 2009; Aggarwal et al., 2009) and
bad instances (Arthur & Vassilvitskii, 2007; Brunsch & Röglin, 2011). As such, these results are
complementary to the ones presented in this paper.

An alternative approach to scalable seeding was investigated by Bahmani et al. (2012). The authors
propose the k-means|| algorithm that retains the same O(log k) guarantee in expectation as
k-means++. k-means|| reduces the number of sequential passes through the data to O(log n) by
oversampling cluster centers in each of the rounds. While this allows one to parallelize each of the
O(log n) rounds, it also increases the total computational complexity from O(nkd) to O(nkd log n).
This method is feasible if substantial computational resources are available in the form of a cluster.
Our approach, on the other hand, has an orthogonal use case: It aims to efficiently approximate
k-means++ seeding with a substantially lower complexity.
k-means++ seeding with a substantially lower complexity.
3
Assumption-free K-MC2
Building on the MCMC strategy introduced by Bachem et al. (2016), we propose an algorithm which
addresses the drawbacks of the K - MC2 algorithm, namely:
(1) The theoretical results of K - MC2 hold only if the data is drawn independently from a distribution
satisfying the assumptions stated in Section 2. For example, the results do not extend to heavytailed distributions which are often observed in real world data.
(2) Verifying the assumptions, which in turn imply the required chain length, is computationally hard
and potentially more expensive than running the algorithm. In fact, calculating ?(X ) already
requires two full passes through the data, while computing (X ) is NP-hard.
(3) Theorem 2 of Bachem et al. (2016) does not characterize the tradeoff between m and the expected
solution quality: It is only valid for the specific choice of chain length m = ? k log2 n log k .
As a consequence, if the assumptions do not hold, we obtain no theoretical guarantee with regards
to the solution quality. Furthermore, the constants in Theorem 2 are not known and may be large.
Our approach addresses these shortcomings using three key elements. Firstly, we provide a proposal
distribution that renders the assumption on ?(X ) obsolete. Secondly, a novel theoretic analysis
allows us to obtain theoretical guarantees on the solution quality even without assumptions on (X ).
Finally, our results characterize the tradeoff between increasing the chain length m and improving
the expected solution quality.
Algorithm 1 ASSUMPTION-FREE K-MC² (AFK-MC²)
Require: Data set X, number of centers k, chain length m
 // Preprocessing step
 1: c1 ← point sampled uniformly at random from X
 2: for all x ∈ X do
 3:   q(x) ← (1/2) d(x, c1)² / Σ_{x′∈X} d(x′, c1)² + 1/(2n)
 // Main loop
 4: C1 ← {c1}
 5: for i = 2, 3, . . . , k do
 6:   x ← point sampled from X using q(x)
 7:   dx ← d(x, C_{i−1})²
 8:   for j = 2, 3, . . . , m do
 9:     y ← point sampled from X using q(y)
10:     dy ← d(y, C_{i−1})²
11:     if (dy q(x)) / (dx q(y)) > Unif(0, 1) then x ← y, dx ← dy
12:   C_i ← C_{i−1} ∪ {x}
13: return C_k
Proposal distribution. We argue that the choice of the proposal distribution is critical. Intuitively,
the uniform distribution can be a very bad choice if, in any iteration, the true D²-sampling distribution
is "highly" nonuniform. We suggest the following proposal distribution: We first sample a center
c1 ∈ X uniformly at random and define for all x ∈ X the nonuniform proposal

    q(x | c1) = (1/2) · d(x, c1)² / Σ_{x′∈X} d(x′, c1)²  +  (1/2) · 1/|X| ,    (4)

where (A) denotes the first term and (B) the second. The term (A) is the true D²-sampling distribution
with regards to the first center c1. For any data set, it ensures that we start with the best possible
proposal distribution in the second iteration. We will show that this proposal is sufficient even for
later iterations, rendering any assumptions on α obsolete. The term (B) regularizes the proposal
distribution and ensures that the mixing time of K-MC² is always matched up to a factor of two.
Algorithm. Algorithm 1 details the proposed fast seeding algorithm ASSUMPTION-FREE K-MC². In
the preprocessing step, it first samples an initial center c1 uniformly at random and then computes the
proposal distribution q(· | c1). In the main loop, it then uses independent Markov chains of length m
to sample centers in each of the k − 1 iterations. The complexity of the main loop is O(mk²d).

The preprocessing step of ASSUMPTION-FREE K-MC² requires a single pass through the data to
compute the proposal q(· | c1). There are several reasons why this additional complexity of O(nd)
is not an issue in practice: (1) The preprocessing step only requires a single pass through the data
compared to k passes for the seeding of k-means++. (2) It is easily parallelized. (3) Given random
access to the data, the proposal distribution can be calculated online when saving or copying the data.
(4) As we will see in Section 4, the effort spent in the preprocessing step pays off: It often allows
for shorter Markov chains in the main loop. (5) Computing α(X) to verify the first assumption of
K-MC² is already more expensive than the preprocessing step of ASSUMPTION-FREE K-MC².
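As a concrete reading of Algorithm 1, a short numpy sketch follows; it is ours and not the released implementation. The acceptance test of line 11 is written multiplicatively so that dx = 0 needs no special case.

    import numpy as np

    def afk_mc2(X, k, m, rng=np.random.default_rng(0)):
        # Assumption-free K-MC^2 (Algorithm 1), sketched in numpy.
        n = X.shape[0]
        c1 = X[rng.integers(n)]
        # Preprocessing (one pass over the data): nonuniform proposal of Eq. (4).
        d2_c1 = np.sum((X - c1)**2, axis=1)
        q = 0.5 * d2_c1 / d2_c1.sum() + 0.5 / n          # sums to one
        centers = [c1]

        def dist2(i):  # d(x_i, C)^2; O(|C| d) per call, as in the paper
            return min(np.sum((X[i] - c)**2) for c in centers)

        for _ in range(k - 1):
            x = rng.choice(n, p=q)
            dx = dist2(x)
            for _ in range(m - 1):
                y = rng.choice(n, p=q)
                dy = dist2(y)
                # accept with probability min(1, dy*q(x) / (dx*q(y)))
                if dy * q[x] > dx * q[y] * rng.random():
                    x, dx = y, dy
            centers.append(X[x])
        return np.stack(centers)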
Theorem 1. Let ε ∈ (0, 1) and k ∈ N. Let X be any set of n points in R^d and C be the output of
Algorithm 1 with m = 1 + (8/ε) log(4k/ε). Then, it holds that

    E[φ_C(X)] ≤ 8(log₂ k + 2) φ_k^OPT(X) + ε Var(X).

The computational complexity of the preprocessing step is O(nd) and the computational complexity
of the main loop is O((1/ε) k² d log(k/ε)).
This result shows that ASSUMPTION-FREE K-MC² produces provably good clusterings for arbitrary
data sets without assumptions. The guarantee consists of two terms: The first term, i.e., 8(log₂ k +
2) φ_k^OPT(X), is the theoretical guarantee of k-means++. The second term, ε Var(X), quantifies the
potential additional error due to the approximation. The variance is a natural notion as the mean is
the optimal quantizer for k = 1. Intuitively, the second term may be interpreted as a scale-invariant
and additive approximation error.
Theorem 1 directly characterizes the tradeoff between improving the solution quality and the resulting
increase in computational complexity. As m is increased, the solution quality converges to the
theoretical guarantee of k-means++. At the same time, even for smaller chain lengths m, we obtain
a provable bound on the solution quality. In contrast, the guarantee of K-MC² on the solution quality
only holds for a specific choice of m.
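For a concrete sense of scale, the chain length prescribed by Theorem 1 is easy to evaluate (the ceiling is our addition; the theorem only requires m to be at least this large):

    import math

    def chain_length(eps, k):
        # m = 1 + (8 / eps) * log(4k / eps) from Theorem 1
        return 1 + math.ceil((8 / eps) * math.log(4 * k / eps))

    print(chain_length(0.1, 200))  # 720: modest, and independent of n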
For completeness, ASSUMPTION-FREE K-MC² may also be analyzed under the assumptions made
in Bachem et al. (2016). While for K-MC² the required chain length m is linear in α(X),
ASSUMPTION-FREE K-MC² does not require this assumption. In fact, we will see in Section 4 that
this lack of dependence on α(X) leads to a better empirical performance. If we assume β(X) ∈ O(k),
we obtain the following result similar to the one of K-MC² (albeit with a shorter chain length m).
Corollary 1. Let k ∈ N and X be a set of n points satisfying β(X) ∈ O(k). Let C be the
output of Algorithm 1 with m = Θ(k log k). Then it holds that

    E[φ_C(X)] ≤ 8(log₂ k + 3) φ_k^OPT(X).

The computational complexity of the preprocessing is O(nd) and the computational complexity of the
main loop is O(k³ d log k).
3.1 Proof sketch for Theorem 1
In this subsection, we provide a sketch of the proof of Theorem 1 and defer the full proof to
Section A of the supplementary materials. Intuitively, we first bound how well a single Markov chain
approximates one iteration of exact D²-sampling. Then, we analyze how the approximation error
accumulates across iterations and provide a bound on the expected solution quality.

For the first step, consider any set C ⊆ X of previously sampled centers. Let c1 ∈ C denote the
first sampled center that was used to construct the proposal distribution q(x | c1) in (4). In a single
iteration, we would ideally sample a new center x ∈ X using D²-sampling, i.e., from p(x | C) as
defined in (1). Instead, Algorithm 1 constructs a Markov chain to sample a new center x ∈ X as the
next cluster center. We denote by p̃_{c1}^m(x | C) the implied probability of sampling a point x ∈ X using
this Markov chain of length m.

The following result shows that in any iteration either C is ε1-competitive compared to c1 or the
Markov chain approximates D²-sampling well in terms of total variation distance¹.

Lemma 1. Let ε1, ε2 ∈ (0, 1) and c1 ∈ X. Consider any set C ⊆ X with c1 ∈ C. For m ≥
1 + (2/ε1) log(1/ε2), at least one of the following holds:

    (i)  φ_C(X) < ε1 φ_{c1}(X),  or
    (ii) ‖p(· | C) − p̃_{c1}^m(· | C)‖_TV ≤ ε2.
In the second step, we bound the expected solution quality of Algorithm 1 based on Lemma 1. While
the full proof requires careful propagation of errors across iterations and a corresponding inductive
argument, the intuition is based on distinguishing between two possible cases of sampled solutions.

First, consider the realizations of the solution C that are ε1-competitive compared to c1. By definition,
φ_C(X) < ε1 φ_{c1}(X). Furthermore, the expected solution quality of these realizations can be bounded
by 2ε1 Var(X) since c1 is chosen uniformly at random and hence in expectation φ_{c1}(X) ≤ 2 Var(X).

Second, consider the realizations that are not ε1-competitive compared to c1. Since the quantization
error is non-increasing in sampled centers, Lemma 1 implies that all k − 1 Markov chains result in a
good approximation of the corresponding D²-sampling. Intuitively, this implies that the approximation
error in terms of total variation distance across all k − 1 iterations is at most ε2(k − 1). Informally,
the expected solution quality is thus bounded with probability 1 − ε2(k − 1) by the expected quality
of k-means++ and with probability ε2(k − 1) by φ_{c1}(X).

Theorem 1 can then be proven by setting ε1 = ε/4 and ε2 = ε/4k and choosing m sufficiently large.
¹Let Ω be a finite sample space on which two probability distributions p and q are defined. The total
variation distance ‖p − q‖_TV between p and q is given by (1/2) Σ_{x∈Ω} |p(x) − q(x)|.
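As a worked example of the footnote's formula (the helper name is ours):

    import numpy as np

    def tv_distance(p, q):
        # (1/2) * sum_x |p(x) - q(x)| for discrete distributions p and q.
        return 0.5 * np.abs(np.asarray(p) - np.asarray(q)).sum()

    assert tv_distance([1.0, 0.0], [0.5, 0.5]) == 0.5  # deterministic coin vs fair coin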
Table 1: Data sets used in experimental evaluation

DATA SET                      N            D    K      EVAL   α(X)
CSN  (earthquakes)            80,000       17   200    T        546
KDD  (protein homology)       145,751      74   200    T      1,268
RNA  (RNA sequences)          488,565      8    200    T         69
SONG (music songs)            515,345      90   2,000  H        526
SUSY (supersym. particles)    5,000,000    18   2,000  H        201
WEB  (web users)              45,811,883   5    2,000  H          2
Table 2: Relative error of ASSUMPTION-FREE K-MC² and K-MC² in relation to k-means++.

                      CSN        KDD        RNA       SONG     SUSY      WEB
k-means++             0.00%      0.00%      0.00%     0.00%    0.00%     0.00%
RANDOM              399.54%    314.78%    915.46%     9.67%    4.30%   107.57%
K-MC²   (m = 20)     65.34%     31.91%     32.51%     0.41%   -0.03%     0.86%
K-MC²   (m = 100)    14.81%      3.39%      9.84%     0.04%   -0.08%    -0.01%
K-MC²   (m = 200)     5.97%      0.65%      5.48%     0.02%   -0.04%     0.09%
AFK-MC² (m = 20)      1.45%     -0.12%      8.31%     0.01%    0.00%     1.32%
AFK-MC² (m = 100)     0.25%     -0.11%      0.81%    -0.02%   -0.06%     0.06%
AFK-MC² (m = 200)     0.24%     -0.03%     -0.29%     0.04%   -0.05%    -0.16%

3.2 Extension to other clustering problems
While we only consider k-Means clustering and the Euclidean distance in this paper, the results are
more general. They can be directly applied, by transforming the data, to any metric space for which
there exists a global isometry on Euclidean spaces. Examples would be the Mahalanobis distance and
Generalized Symmetrized Bregman divergences (Acharyya et al., 2013).

The results also apply to arbitrary distance measures (albeit with different constants) as D²-sampling
can be generalized to arbitrary distance measures (Arthur & Vassilvitskii, 2007). However, Var(X)
needs to be replaced by φ_1^OPT(X) in Theorem 1 since the mean may not be the optimal quantizer (for
k = 1) for a different distance metric. The proposed algorithm can further be extended to different
potential functions of the form ‖·‖^l and used to approximate the corresponding D^l-sampling (Arthur
& Vassilvitskii, 2007), again with different constants. Similarly, the results also apply to bregman++
(Ackermann & Blömer, 2010) which provides provably competitive solutions for clustering with a
broad class of Bregman divergences (including the KL-divergence and Itakura-Saito distance).
4 Experimental results
In this section², we empirically validate our theoretical results and compare the proposed algorithm
ASSUMPTION-FREE K-MC² (AFK-MC²) to three alternative seeding strategies: (1) RANDOM, a
"naive" baseline that samples k centers from X uniformly at random, (2) the full seeding step of
k-means++, and (3) K-MC². For both ASSUMPTION-FREE K-MC² and K-MC², we consider the
different chain lengths m ∈ {1, 2, 5, 10, 20, 50, 100, 150, 200}.

Table 1 shows the six data sets used in the experiments with their corresponding values for k. We
choose an experimental setup similar to Bachem et al. (2016): For half of the data sets, we both train
the algorithm and evaluate the corresponding solution on the full data set (denoted by T in the EVAL
column of Table 1). This corresponds to the classical k-Means setting. In practice, however, one is
often also interested in the generalization error. For the other half of the data sets, we retain 250,000
data points as the holdout set for the evaluation (denoted by H in the EVAL column of Table 1).
For all methods, we record the solution quality (either on the full data set or the holdout set) and
measure the number of distance evaluations needed to run the algorithm. For ASSUMPTION-FREE K-MC²
this includes both the preprocessing and the main loop. We run every algorithm 200 times with
different random seeds and average the results. We further compute and display 95% confidence
intervals for the solution quality.
²An implementation of ASSUMPTION-FREE K-MC² has been released at http://olivierbachem.ch.
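Concretely, the recorded solution quality is the quantization error of Section 2, evaluated either on the training set or on the holdout set; the protocol can be sketched as follows (names and structure are ours, not the released code):

    import numpy as np

    def quantization_error(X, C):
        # phi_C(X): sum over x of the squared distance to the closest center.
        d2 = ((X[:, None, :] - C[None, :, :])**2).sum(axis=2)  # (n, k); chunk for large n
        return d2.min(axis=1).sum()

    def average_quality(seeding, X_train, X_eval, k, n_runs=200):
        # Mean quantization error over independent seeding runs.
        errs = [quantization_error(X_eval, seeding(X_train, k, np.random.default_rng(r)))
                for r in range(n_runs)]
        return float(np.mean(errs))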
[Figure 1 appears here: six panels, one per data set; the top row (CSN, KDD, RNA) shows training
error and the bottom row (SONG, SUSY, WEB) shows holdout error, each plotted against the chain
length m, with curves for k-means++, RANDOM, AFK-MC² and K-MC².]
Figure 1: Quantization error in relation to the chain length m for ASSUMPTION-FREE K-MC² and
K-MC², as well as the quantization error for k-means++ and RANDOM (with no dependence on m).
ASSUMPTION-FREE K-MC² substantially outperforms K-MC² except on WEB. Results are averaged
across 200 iterations and shaded areas denote 95% confidence intervals.
[Figure 2 appears here: six panels, one per data set (CSN, KDD, RNA, SONG, SUSY, WEB); the
top row shows training error and the bottom row holdout error, each plotted against the number of
distance evaluations, with curves for k-means++, AFK-MC² and K-MC².]
Figure 2: Quantization error in relation to the number of distance evaluations for
ASSUMPTION-FREE K-MC², K-MC² and k-means++. ASSUMPTION-FREE K-MC² provides a
speedup of up to several orders of magnitude compared to k-means++. Results are averaged across
200 iterations and shaded areas denote 95% confidence intervals.
Table 3: Relative speedup (in terms of distance evaluations) in relation to k-means++.

                      CSN      KDD      RNA      SONG    SUSY      WEB
k-means++             1.0×     1.0×     1.0×     1.0×     1.0×      1.0×
K-MC²   (m = 20)     40.0×    72.9×   244.3×    13.3×   237.5×   2278.1×
K-MC²   (m = 100)     8.0×    14.6×    48.9×     2.7×    47.5×    455.6×
K-MC²   (m = 200)     4.0×     7.3×    24.4×     1.3×    23.8×    227.8×
AFK-MC² (m = 20)     33.3×    53.3×   109.7×    13.2×   212.3×   1064.7×
AFK-MC² (m = 100)     7.7×    13.6×    39.2×     2.6×    46.4×    371.0×
AFK-MC² (m = 200)     3.9×     7.0×    21.8×     1.3×    23.5×    204.5×
Discussion. Figure 1 shows the expected quantization error for the two baselines, RANDOM and
k-means++, and for the MCMC methods with different chain lengths m. As expected, the seeding
step of k-means++ strongly outperforms RANDOM on all data sets. As the chain length m increases,
the quality of solutions produced by both ASSUMPTION-FREE K-MC² and K-MC² quickly converges
to that of k-means++ seeding.

On all data sets except WEB, ASSUMPTION-FREE K-MC² starts with a lower initial error due to the
improved proposal distribution and outperforms K-MC² for any given chain length m. For WEB,
both algorithms exhibit approximately the same performance. This is expected as α(X) of WEB is
very low (see Table 1). Hence, there is only a minor difference between the nonuniform proposal of
ASSUMPTION-FREE K-MC² and the uniform proposal of K-MC². In fact, one of the key advantages
of ASSUMPTION-FREE K-MC² is that its proposal adapts to the data set at hand.
As discussed in Section 3, ASSUMPTION-FREE K-MC² requires an additional preprocessing step
to compute the nonuniform proposal. Figure 2 shows the expected solution quality in relation
to the total computational complexity in terms of the number of distance evaluations. Both K-MC²
and ASSUMPTION-FREE K-MC² generate solutions that are competitive with those produced by
the seeding step of k-means++. At the same time, they do this at a fraction of the computational
cost. Despite the preprocessing, ASSUMPTION-FREE K-MC² clearly outperforms K-MC² on the data
sets with large values for α(X) (CSN, KDD and SONG). The additional effort of computing the
nonuniform proposal is compensated by a substantially lower expected quantization error for a given
chain size. For the other data sets, ASSUMPTION-FREE K-MC² is initially disadvantaged by the cost
of computing the proposal distribution. However, as m increases and more time is spent computing
the Markov chains, it either outperforms K-MC² (RNA and SUSY) or matches its performance (WEB).
Table 3 details the practical significance of the proposed algorithm. The results indicate that in
practice it is sufficient to run ASSUMPTION-FREE K-MC² with a chain length independent of n.
Even with a small chain length, ASSUMPTION-FREE K-MC² produces competitive clusterings at
a fraction of the computational cost of the seeding step of k-means++. For example, on CSN,
ASSUMPTION-FREE K-MC² with m = 20 achieves a relative error of 1.45% and a speedup of 33.3×.
At the same time, K-MC² would have exhibited a substantial relative error of 65.34% while only
obtaining a slightly better speedup of 40.0×.
5 Conclusion
In this paper, we propose ASSUMPTION-FREE K-MC², a simple and fast seeding algorithm for
k-Means. In contrast to the previously introduced algorithm K-MC², it produces provably good
clusterings even without assumptions on the data. As a key advantage, ASSUMPTION-FREE K-MC²
allows one to provably trade off solution quality for a decreased computational effort. Extensive
experiments illustrate the practical significance of the proposed algorithm: It obtains competitive
clusterings at a fraction of the cost of k-means++ seeding and it outperforms or matches its main
competitor K-MC² on all considered data sets.
Acknowledgments
This research was partially supported by ERC StG 307036, a Google Ph.D. Fellowship and an IBM
Ph.D. Fellowship.
References
Acharyya, Sreangsu, Banerjee, Arindam, and Boley, Daniel. Bregman divergences and triangle
inequality. In SIAM International Conference on Data Mining (SDM), pp. 476–484, 2013.
Ackermann, Marcel R and Blömer, Johannes. Bregman clustering for separable instances. In SWAT,
pp. 212–223. Springer, 2010.
Aggarwal, Ankit, Deshpande, Amit, and Kannan, Ravi. Adaptive sampling for k-means clustering.
In Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques,
pp. 15–28. Springer, 2009.
Ailon, Nir, Jaiswal, Ragesh, and Monteleoni, Claire. Streaming k-means approximation. In Neural
Information Processing Systems (NIPS), pp. 10–18, 2009.
Arthur, David and Vassilvitskii, Sergei. k-means++: The advantages of careful seeding. In Symposium
on Discrete Algorithms (SODA), pp. 1027–1035. Society for Industrial and Applied Mathematics,
2007.
Bachem, Olivier, Lucic, Mario, Hassani, S. Hamed, and Krause, Andreas. Approximate k-means++
in sublinear time. In Conference on Artificial Intelligence (AAAI), February 2016.
Bahmani, Bahman, Moseley, Benjamin, Vattani, Andrea, Kumar, Ravi, and Vassilvitskii, Sergei.
Scalable k-means++. Very Large Data Bases (VLDB), 5(7):622–633, 2012.
Bottou, Leon and Bengio, Yoshua. Convergence properties of the k-means algorithms. In Neural
Information Processing Systems (NIPS), pp. 585–592, 1994.
Brunsch, Tobias and Röglin, Heiko. A bad instance for k-means++. In Theory and Applications of
Models of Computation, pp. 344–352. Springer, 2011.
Cai, Haiyan. Exact bound for the convergence of Metropolis chains. Stochastic Analysis and
Applications, 18(1):63–71, 2000.
Celebi, M Emre, Kingravi, Hassan A, and Vela, Patricio A. A comparative study of efficient
initialization methods for the k-means clustering algorithm. Expert Systems with Applications, 40
(1):200–210, 2013.
Hastings, W Keith. Monte Carlo sampling methods using Markov chains and their applications.
Biometrika, 57(1):97–109, 1970.
Jaiswal, Ragesh, Kumar, Amit, and Sen, Sandeep. A simple D²-sampling based PTAS for k-means
and other clustering problems. Algorithmica, 70(1):22–46, 2014.
Jaiswal, Ragesh, Kumar, Mehul, and Yadav, Pulkit. Improved analysis of D²-sampling based PTAS
for k-means and other clustering problems. Information Processing Letters, 115(2):100–103, 2015.
Lloyd, Stuart. Least squares quantization in PCM. IEEE Transactions on Information Theory, 28(2):
129–137, 1982.
Ostrovsky, Rafail, Rabani, Yuval, Schulman, Leonard J, and Swamy, Chaitanya. The effectiveness of
Lloyd-type methods for the k-means problem. In Foundations of Computer Science (FOCS), pp.
165–176. IEEE, 2006.
Sculley, D. Web-scale k-means clustering. In World Wide Web (WWW), pp. 1177–1178. ACM, 2010.
Zhao, Weizhong, Ma, Huifang, and He, Qing. Parallel k-means clustering based on MapReduce. In
Cloud Computing, pp. 674–679. Springer, 2009.
6,056 | 6,479 | Optimal spectral transportation with application to
music transcription
Rémi Flamary
Université Côte d'Azur, CNRS, OCA
remi.flamary@unice.fr
Nicolas Courty
Université de Bretagne Sud, CNRS, IRISA
courty@univ-ubs.fr
Cédric Févotte
CNRS, IRIT, Toulouse
cedric.fevotte@irit.fr
Valentin Emiya
Aix-Marseille Université, CNRS, LIF
valentin.emiya@lif.univ-mrs.fr
Abstract
Many spectral unmixing methods rely on the non-negative decomposition of spectral data onto a dictionary of spectral templates. In particular, state-of-the-art
music transcription systems decompose the spectrogram of the input signal onto
a dictionary of representative note spectra. The typical measures of fit used to
quantify the adequacy of the decomposition compare the data and template entries
frequency-wise. As such, small displacements of energy from a frequency bin
to another as well as variations of timbre can disproportionally harm the fit. We
address these issues by means of optimal transportation and propose a new measure
of fit that treats the frequency distributions of energy holistically as opposed to
frequency-wise. Building on the harmonic nature of sound, the new measure is
invariant to shifts of energy to harmonically-related frequencies, as well as to
small and local displacements of energy. Equipped with this new measure of fit,
the dictionary of note templates can be considerably simplified to a set of Dirac
vectors located at the target fundamental frequencies (musical pitch values). This in
turns gives ground to a very fast and simple decomposition algorithm that achieves
state-of-the-art performance on real musical data.
1 Context
Many of today's spectral unmixing techniques rely on non-negative matrix decompositions. This
concerns for example hyperspectral remote sensing (with applications in Earth observation, astronomy,
chemistry, etc.) or audio signal processing. The spectral sample vn (the spectrum of light observed at
a given pixel n, or the audio spectrum in a given time frame n) is decomposed onto a dictionary W of
elementary spectral templates, characteristic of pure materials or sound objects, such that vn ≈ Whn.
The composition of sample n can be inferred from the non-negative expansion coefficients hn . This
paradigm has led to state-of-the-art results for various tasks (recognition, classification, denoising,
separation) in the aforementioned areas, and in particular in music transcription, the central application
of this paper.
In state-of-the-art music transcription systems, the spectrogram V (with columns vn ) of a musical
signal is decomposed onto a dictionary of pure notes (in so-called multi-pitch estimation) or chords. V
typically consists of (power-)magnitude values of a regular short-time Fourier transform (Smaragdis
and Brown, 2003). It may also consist of an audio-specific spectral transform such as the Mel-frequency transform, like in (Vincent et al., 2010), or the constant-Q transform, like in (Oudre
et al., 2011). The success of the transcription system depends of course on the adequacy of the
time-frequency transform & the dictionary to represent the data V. In particular, the matrix W must
be able to accurately represent a diversity of real notes. It may be trained with individual notes using
annotated data (Boulanger-Lewandowski et al., 2012), have a parametric form (Rigaud et al., 2013)
or be learnt from the data itself using a harmonic subspace constraint (Vincent et al., 2010).
One important challenge of such methods lies in their ability to cope with the variability of real notes.
A simplistic dictionary model will assume that one note characterised by fundamental frequency ν0
(e.g., ν0 = 440 Hz for note A4) will be represented by a spectral template with non-zero coefficients
placed at ν0 and at its multiples (the harmonic frequencies). In reality, many instruments, such as the
piano, produce musical notes with either slight frequency misalignments (so-called inharmonicities)
with respect to the theoretical values of the fundamental and harmonic frequencies, or amplitude
variations at the harmonic frequencies with respect to recording conditions or played instrument
(variations of timbre). Handling these variabilities by increasing the dictionary with more templates
is typically unrealistic and adaptive dictionaries have been considered in (Vincent et al., 2010; Rigaud
et al., 2013). In these papers, the spectral shape of the columns of W is adjusted to the data at hand,
using specific time-invariant semi-parametric models. However, the note realisations may vary in time,
something which is not handled by these approaches. This work presents a new spectral unmixing
method based on optimal transportation (OT) that is fully flexible and remedies the latter difficulties.
Note that Typke et al. (2004) have previously applied OT to notated music (e.g., score sheets) for
search-by-query in databases while we address here music transcription from audio spectral data.
2 A relevant baseline: PLCA
Before presenting our contributions, we start by introducing the PLCA method of Smaragdis et al.
(2006) which is heavily used in audio signal processing. It is based on the Probabilistic Latent
Semantic Analysis (PLSA) of Hofmann (2001) (used in text retrieval) and is a particular form of nonnegative matrix factorisation (NMF). Simplifying a bit, in PLCA the columns of V are normalised
to sum to one. Each vector vn is then treated as a discrete probability distribution of "frequency
quanta" and is approximated as V ≈ WH. The matrices W and H are of size M × K and K × N,
respectively, and their columns are constrained to sum to one. As a result, the columns of the
approximate V̂ = WH sum to one as well, and each distribution vector vn is as such approximated
by the counterpart distribution v̂n in V̂. Under the assumption that W is known, the approximation
is found by solving the optimisation problem defined by
min DKL (V|WH) s.t ?n, khn k1 = 1,
H?0
(1)
P
where DKL (v|?
v) = i vi log(vi /?
vi ) is the KL divergence between discrete distributions, and by
? = P DKL (vn |?
extension DKL (V|V)
vn ).
n
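To make the baseline concrete, below is a minimal NumPy sketch of the multiplicative EM updates that solve problem (1) for H with W fixed. The function name, the uniform initialisation and the iteration count are our own illustrative choices, not the reference PLCA implementation.

```python
import numpy as np

def plca_activations(V, W, n_iter=200, eps=1e-12):
    """EM/multiplicative updates for min_H D_KL(V | WH) s.t. ||h_n||_1 = 1.

    V: (M, N) spectrogram with columns summing to one.
    W: (M, K) fixed dictionary with columns summing to one.
    """
    K, N = W.shape[1], V.shape[1]
    H = np.full((K, N), 1.0 / K)                 # uniform start on the simplex
    for _ in range(n_iter):
        V_hat = W @ H + eps                      # current approximation WH
        H *= W.T @ (V / V_hat)                   # multiplicative KL update
        H /= H.sum(axis=0, keepdims=True) + eps  # re-project columns onto the simplex
    return H
```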
An important characteristic of the KL divergence is its separability with respect to the entries of its
arguments. It performs a frequency-wise comparison in the sense that, at every frame n, the spectral coefficient v_in at frequency i is compared to its counterpart v̂_in, and the results of the comparisons are summed over i. In particular, a small displacement in the frequency support of one observation may disproportionately harm the divergence value. For example, if v_n is a pure note with fundamental frequency ν₀, a small inharmonicity that shifts energy from ν₀ to an adjacent frequency bin will unreasonably increase the divergence value when v_n is compared with a purely harmonic spectral template with fundamental frequency ν₀. As explained in Section 1, such local displacements of
frequency energy are very common when dealing with real data. A measure of fit invariant to small
perturbations of the frequency support would be desirable in such a setting, and this is precisely what
OT can bring.
3  Elements of optimal transportation
Given a discrete probability distribution v (a non-negative real-valued column vector of dimension M and summing to one) and a target distribution v̂ (with the same properties), OT computes a transportation matrix T belonging to the set

    Θ ≝ { T ∈ ℝ₊^{M×M} | ∀i, j = 1, …, M,  Σ_{j=1}^{M} t_ij = v_i,  Σ_{i=1}^{M} t_ij = v̂_j }.

T establishes a bi-partite graph connecting the two distributions. In simple words, an amount (or, in typical OT parlance, a "mass") of every coefficient of vector v is transported to an entry of v̂. The sum of transported amounts to the j-th entry of v̂ must equal v̂_j. The value of t_ij is the amount transported from the i-th entry of v to the j-th entry of v̂. In our particular setting, the vector v is a distribution of spectral energies v₁, …, v_M at sampling frequencies f₁, …, f_M.
Without additional constraints, the problem of finding a non-negative matrix T ∈ Θ has an infinite number of solutions. As such, OT takes into account the cost of transporting an amount from the i-th entry of v to the j-th entry of v̂, denoted c_ij (a non-negative real-valued number). Equipped with this cost function, OT involves solving the optimisation problem defined by

    min_T J(T | v, v̂, C) = Σ_ij c_ij t_ij   s.t.   T ∈ Θ,                     (2)

where C is the non-negative square matrix of size M with elements c_ij. Eq. (2) defines a convex linear program. The value of the function J(T | v, v̂, C) at its minimum is denoted D_C(v | v̂). When C is a symmetric matrix such that c_ij = ‖f_i − f_j‖_p^p, where we recall that f_i and f_j are the frequencies in Hertz indexed by i and j, D_C(v | v̂) defines a metric (i.e., a symmetric divergence that satisfies the triangle inequality) coined Wasserstein distance or earth mover's distance (Rubner et al., 1998; Villani, 2009). In other cases, in particular when the matrix C is not even symmetric like in the next section, D_C(v | v̂) is not a metric in general, but is still a valid measure of fit. For generality, we will refer to it as the "OT divergence".
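For small M, the OT divergence can be evaluated exactly by handing the linear program (2) to an off-the-shelf solver. The sketch below uses SciPy's linprog; it is only meant to make the marginal constraints explicit, and this brute-force formulation does not scale to the problem sizes discussed in Section 5.

```python
import numpy as np
from scipy.optimize import linprog

def ot_divergence(v, v_hat, C):
    """Solve problem (2) exactly and return D_C(v | v_hat).

    v, v_hat: length-M non-negative vectors summing to one.
    C: (M, M) cost matrix with entries c_ij.
    """
    M = len(v)
    # T is flattened row-major: entry t_ij sits at index i * M + j.
    A_rows = np.kron(np.eye(M), np.ones((1, M)))  # sum_j t_ij = v_i
    A_cols = np.kron(np.ones((1, M)), np.eye(M))  # sum_i t_ij = v_hat_j
    res = linprog(C.ravel(),
                  A_eq=np.vstack([A_rows, A_cols]),
                  b_eq=np.concatenate([v, v_hat]),
                  bounds=(0, None), method="highs")
    return res.fun  # optimal transport cost, i.e. D_C(v | v_hat)
```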
By construction, the OT divergence can explicitly embed a form of invariance to displacements of
support, as defined by the transportation cost matrix C. For example, in the spectral decomposition
setting, the matrix with entries of the form c_ij = (f_i − f_j)² will increasingly penalise frequency
displacements as the distance between frequency bins increases. This precisely remedies the limitation
of the separable KL divergence presented in Section 2. As such, the next section addresses variants
of spectral unmixing based on the Wasserstein distance.
4  Optimal spectral transportation (OST)
Unmixing with OT. In light of the above discussion, a direct solution to the sensitivity of PLCA to small frequency displacements consists in replacing the KL divergence with the OT divergence. This amounts to solving the optimisation problem given by

    min_{H ≥ 0} D_C(V | WH)   s.t.   ∀n, ‖h_n‖₁ = 1,                          (3)

where D_C(V | V̂) = Σ_n D_C(v_n | v̂_n), W is fixed and populated with pure note spectra, and C penalises large displacements of frequency support. This approach is a particular case of NMF with the Wasserstein distance, which has been considered in a face recognition setting by Sandler and Lindenbaum (2011), with subsequent developments by Zen et al. (2014) and Rolet et al. (2016).
This approach is relevant to our spectral unmixing scenario but, as will be discussed in Section 5, it is computationally intensive. It also requires the columns of W to be set to realistic note templates, which is still constraining. The next two sections describe a computationally more friendly approach which additionally removes the difficulty of choosing W appropriately.
Harmonic-invariant transportation cost. In the approach above, the harmonic modelling is
conveyed by the dictionary W (consisting of comb-like pure note spectra) and the invariance to small
frequency displacements is introduced via the matrix C. In this section we propose to model both
harmonicity and local invariance through the transportation cost matrix C. Loosely speaking, we
want to define an equivalence class of musical spectra that takes into account their inherent
harmonic nature. As such, we essentially impose that a harmonic frequency (i.e., a close multiple
of its fundamental) can be considered equivalent to its fundamental, the only target of multi-pitch
estimation. As such, we assume that a mass at one frequency can be transported to a divisor frequency
with no cost. In other words, a mass at frequency fi can be transported with no cost to fi /2, fi /3,
fi /4, and so on until sampling resolution. One possible cost matrix that embeds this property is
    c_ij = min_{q=1,…,q_max} (f_i − q f_j)² + ε δ_{q≠1},                      (4)

where q_max is the ceiling of f_i/f_j and ε is a small value. The term ε δ_{q≠1} favours the discrimination of octaves. Indeed, it penalises the transportation of a note of fundamental frequency 2ν₀ or ν₀/2 to the spectral template with fundamental frequency ν₀, which would be costless without this additive term. Let us denote by C_h the transportation cost matrix defined by Eq. (4).
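A direct way to materialise Eq. (4) is the unoptimised loop below (our own illustration, assuming strictly positive frequencies). It takes the source frequencies f_i and the target frequencies of the columns of C, either the f_j of Eq. (4) or the fundamentals ν_k used in Section 5.

```python
import numpy as np

def harmonic_cost(f_src, f_tgt, eps):
    """Eq. (4): c_ij = min over q = 1..ceil(f_i / f_j) of (f_i - q f_j)^2,
    plus the penalty eps whenever the minimising q differs from 1."""
    C = np.empty((len(f_src), len(f_tgt)))
    for i, fi in enumerate(f_src):
        for j, fj in enumerate(f_tgt):
            q_max = max(1, int(np.ceil(fi / fj)))
            C[i, j] = min((fi - q * fj) ** 2 + (eps if q != 1 else 0.0)
                          for q in range(1, q_max + 1))
    return C
```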
[Figure 1 panels: full matrices of the quadratic cost C₂ and of the harmonic cost C_h (log scale), and selected columns i = 20, 25, 30, 35 of each, plotted over j = 1…100.]
Figure 1: Comparison of transportation cost matrices C₂ and C_h (full matrices and selected columns).
[Figure 2, left panel: one Dirac spectral template v̂ and three data samples v₁, v₂, v₃ plotted over frequency bins 0 to 90.]

Measure of fit | D(v₁|v̂) | D(v₂|v̂) | D(v₃|v̂)
D_ℓ2           | 1.13    | 1.13    | 0.91
D_KL           | 72.92   | 5.42    | 2.02
D_C2           | 145.00  | 10.00   | 1042.67
D_Ch           | 134.32  | 10.00   | 1.00

Figure 2: Three example spectra v_n compared to a given template v̂ (left) and computed divergences (right). The template is a mere Dirac vector placed at a particular frequency ν₀. D_ℓ2 denotes the standard quadratic error ‖x − y‖₂². By construction of D_Ch, sample v₃, which is harmonically related to the template, returns a very good fit with the latter OT divergence. Note that it does not make sense to compare output values of different divergences; only the relative comparison of output values of the same divergence for different input samples is meaningful.
Fig. 1 compares C_h to the more standard quadratic cost C₂ defined by c_ij = (f_i − f_j)². With the quadratic cost, only local displacements are permissible. In contrast, the harmonic-invariant cost additionally permits larger displacements to divisor frequencies, improving robustness to variations of timbre besides inharmonicities.
Dictionary of Dirac vectors. Having designed an OT divergence that encodes inherent properties of
musical signals, we still need to choose a dictionary W that will encode the fundamental frequencies
of the notes to identify. Typically, these will consist of the physical frequencies of the 12 notes of the
chromatic scale (from note A to note G, including half-tones), over several octaves. As mentioned
in Section 1, one possible strategy is to populate W with spectral note templates. However, as also
discussed, the performance of the resulting unmixing method will be capped by the representativeness
of the chosen set of templates.
A most welcome consequence of using the OT divergence built on the harmonic-insensitive cost
matrix Ch is that we may use for W a mere set of Dirac vectors placed at the fundamental frequencies
ν₁, …, ν_K of the notes to identify and separate. Indeed, under the proposed setting, a real note spectrum (composed of one fundamental and multiple harmonic frequencies) can be transported with no cost to its fundamental. Similarly, a spectral sample composed of several notes can be transported to a mixture of Dirac vectors placed at their fundamental frequencies. This simply eliminates the
problem of choosing a representative dictionary! This very appealing property is illustrated in Fig. 2.
Furthermore, the particularly simple structure of the dictionary leads to a very efficient unmixing
algorithm, as explained in the next section. In the following, the unmixing method consisting of the
combined use of the harmonic-invariant cost matrix Ch and of the dictionary of Dirac vectors will be
coined "optimal spectral transportation" (OST).
At this level, we assume for simplicity that the set of K fundamental frequencies {ν₁, …, ν_K} is contained in the set of sampled frequencies {f₁, …, f_M}. This means that w_k (the k-th column of W) is zero everywhere except at some entry i such that f_i = ν_k, where w_ik = 1. This is typically not the case in practice, where the sampled frequencies are fixed by the sampling rate, of the form f_i = 0.5 (i/T) f_s, and where the fundamental frequencies ν_k are fixed by music theory. Our approach can actually deal with such a discrepancy and this will be explained later in Section 5.
5  Optimisation
OT unmixing with linear programming. We start by describing optimisation for the state-of-the-art OT unmixing problem described by Eq. (3) and proposed by Sandler and Lindenbaum (2011).
First, since the objective function is separable with respect to samples, the optimisation problem decouples with respect to the activation columns h_n. Dropping the sample index n and combining Eqs. (2) and (3), optimisation thus reduces to solving for every sample a problem of the form

    min_{h ≥ 0, T ≥ 0} ⟨T, C⟩ = Σ_ij t_ij c_ij   s.t.   T 1_M = v,  Tᵀ 1_M = Wh,          (5)

where 1_M is a vector of dimension M containing only ones and ⟨·, ·⟩ is the Frobenius inner product. Vectorising the variables T and h into a single vector of dimension M² + K, problem (5) can be turned into a canonical linear program. Because of the large dimension of the variable (typically in the order of 10⁵), resolution can however be very demanding, as will be shown in experiments.
Optimisation for OST. We now assume that W is a set of Dirac vectors as explained at the end of Section 4. We also assume that K < M, which is the usual scenario. Indeed, K is typically in the order of a few tens, while M is in the order of a few hundreds. In such a setting v̂ = Wh contains by design at most K non-zero coefficients, located at the entries such that f_i = ν_k. We denote this set of frequency indices by S. Hence, for j ∉ S, we have v̂_j = 0 and thus Σ_i t_ij = 0, by the second constraint of Eq. (5). Additionally, by the non-negativity of T this also implies that T has only K non-zero columns, indexed by j ∈ S. Denoting by T̃ this subset of columns, and by C̃ the corresponding subset of columns of C, problem (5) reduces to

    min_{h ≥ 0, T̃ ≥ 0} ⟨T̃, C̃⟩   s.t.   T̃ 1_K = v,   T̃ᵀ 1_M = h.                        (6)
This is an optimisation problem of significantly reduced dimension (M + 1)K. Even more appealing, the problem has a simple closed-form solution. Indeed, the variable h has a virtual role in problem (6). It only appears in the second constraint, which de facto becomes a free constraint. Thus problem (6) can be solved with respect to T̃ regardless of h, and h is then simply obtained by summing the columns of T̃ at the solution. Now, the problem

    min_{T̃ ≥ 0} ⟨T̃, C̃⟩   s.t.   T̃ 1_K = v                                              (7)

decouples with respect to the rows t̄_i of T̃ and becomes, ∀i = 1, …, M,

    min_{t̄_i ≥ 0} Σ_k t̄_ik c̃_ik   s.t.   Σ_k t̄_ik = v_i.                                (8)

The solution is simply given by t̄_{i,k_i*} = v_i for k_i* = arg min_k {c̃_ik}, and t̄_ik = 0 for k ≠ k_i*. Introducing the labelling matrix L, which is everywhere zero except for indices (i, k_i*) where it is equal to 1, the solution to OST is trivially given by ĥ = Lᵀ v. Thus, under the specific assumption that W is a set of Dirac vectors, the challenging problem (5) has been reduced to an effortless assignment problem to solve for T and a simple sum to solve for h. Note that the algorithm is independent of the particular structure of C. In the end, the complexity per frame of OST reduces to O(M), which starkly contrasts with the complexity of PLCA, in the order of O(KM) per iteration.
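The whole OST decoder therefore fits in a few lines. In the sketch below (our own, with hypothetical names), the assignment k_i* depends only on C̃, so it can be computed once and reused for every frame, which is what gives the O(M) per-frame cost mentioned above.

```python
import numpy as np

def ost(v, C_tilde):
    """Closed-form OST of Eq. (8): all of v_i goes to k_i* = argmin_k c_ik."""
    k_star = np.argmin(C_tilde, axis=1)  # depends only on C: precompute once
    h = np.zeros(C_tilde.shape[1])
    np.add.at(h, k_star, v)              # h_k = total mass assigned to column k
    return h
```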
In Section 4, we assumed for simplicity that the set of fundamental frequencies {ν_k}_k was contained in the set of sampled frequencies {f_i}_i. As a matter of fact, this assumption can be trivially lifted in the proposed setting of OST. Indeed, we may construct the cost matrix C̃ (of dimensions M × K) by replacing the target frequencies f_j in Eq. (4) by the theoretical fundamental frequencies ν_k. Namely, we may simply set the coefficients of C̃ to be c̃_ik = min_q (f_i − q ν_k)² + ε δ_{q≠1} in the implementation. Then, the matrix T̃ indicates how each sample v is transported to the Dirac vectors placed at fundamental frequencies {ν_k}_k, without the need for the actual Dirac vectors themselves, which elegantly solves the frequency sampling problem.
OST with entropic regularisation (OST_e). The procedure described above leads to a winner-takes-all transportation of all of v_i to its cost-minimum target entry k_i*. We found it useful in practice to relax this hard assignment and distribute energies more evenly by using the entropic regularisation of Cuturi (2013). It consists of penalising the fit ⟨T̃, C̃⟩ in Eq. (6) with an additional term Ω_e(T̃) = Σ_ik t̃_ik log(t̃_ik), weighted by the hyper-parameter λ_e. The negentropic term Ω_e(T̃) promotes the transportation of v_i to several entries, leading to a smoother estimate of T̃. As explained in the supplementary material, one can show that the negentropy-regularised problem is a Bregman projection (Benamou et al., 2015) and has again a closed-form solution ĥ = L_eᵀ v, where L_e is the M × K matrix with coefficients l_ik = exp(−c̃_ik/λ_e) / Σ_p exp(−c̃_ip/λ_e). Limiting cases λ_e = 0 and λ_e = ∞ return the unregularised OST estimate and the maximum-entropy estimate h_k = 1/K, respectively. Because L_e becomes a full matrix, the complexity per frame of OST_e becomes O(KM).
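In other words, L_e is a row-wise softmax of −C̃/λ_e, which the following sketch (ours) computes with the usual max-shift for numerical stability.

```python
import numpy as np

def ost_entropic(v, C_tilde, lam_e):
    """Entropic OST: h = L_e^T v with
    l_ik = exp(-c_ik / lam_e) / sum_p exp(-c_ip / lam_e)."""
    A = -C_tilde / lam_e
    A -= A.max(axis=1, keepdims=True)    # stabilise the exponentials
    L_e = np.exp(A)
    L_e /= L_e.sum(axis=1, keepdims=True)
    return L_e.T @ v
```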
OST with group regularisation (OST_g). We have explained above that the transportation matrix T has a strong group structure in the sense that it contains by construction M − K null columns, and that only the subset T̃ needs to be considered. Because a small number of the K possible notes will be played at every time frame, the matrix T̃ will additionally have a significant number of null columns. This heavily suggests using group-sparse regularisation in the estimation of T̃. As such, we also consider problem (6) penalised by the additional term Ω_g(T̃) = Σ_k √‖t̃_k‖₁, which promotes group-sparsity at column level (Huang et al., 2009). Unlike OST or OST_e, OST_g does not offer a closed-form solution. Following Courty et al. (2014), a majorisation-minimisation procedure based on the local linearisation of Ω_g(T̃) can be employed and the details are given in the supplementary material. The resulting algorithm consists in iteratively applying unregularised OST, as of Eq. (6), with the iteration-dependent transportation cost matrix C̃^(iter) = C̃ + R̃^(iter), where R̃^(iter) is the M × K matrix with coefficients r̃_ik^(iter) = (1/2) ‖t̃_k^(iter)‖₁^{−1/2}. Note that the proposed group-regularisation of T̃ corresponds to a sparse regularisation of h. This is because h_k = ‖t̃_k‖₁ and thus Ω_g(T̃) = Σ_k √h_k. Finally, note that OST_e and OST_g can be implemented simultaneously, leading to OST_{e+g}, by considering the optimisation of the doubly-penalised objective function ⟨T̃, C̃⟩ + λ_e Ω_e(T̃) + λ_g Ω_g(T̃), addressed in the supplementary material.
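The majorisation-minimisation loop for OST_g then amounts to re-running plain OST with a reweighted cost. The sketch below is our reading of the scheme (the initialisation and the small eps guard are our own additions); it uses the identity h_k = ‖t̃_k‖₁ to form R̃ from the previous activations.

```python
import numpy as np

def ost_group(v, C_tilde, n_iter=10, eps=1e-12):
    """Majorisation-minimisation for group-regularised OST (a sketch)."""
    M, K = C_tilde.shape
    h = np.full(K, 1.0 / K)                      # arbitrary positive start
    for _ in range(n_iter):
        r = 0.5 / np.sqrt(h + eps)               # r_ik = 0.5 ||t_k||_1^(-1/2)
        k_star = np.argmin(C_tilde + r[None, :], axis=1)
        h = np.zeros(K)
        np.add.at(h, k_star, v)                  # unregularised OST step
    return h
```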
6  Experiments
Toy experiments with simulated data. In this section we illustrate the robustness, the flexibility
and the efficiency of OST on two simulated examples. The top plots of Fig. 3 display a synthetic
dictionary of 8 harmonic spectral templates, referred to as the "harmonic dictionary". They have been generated as Gaussian kernels placed at a fundamental frequency and its multiples, and using
exponential dampening of the amplitudes. As everywhere in the paper, the spectral templates are
normalised to sum to one. Note that the 8th template is the upper octave of the first one. We compare
the unmixing performance of five methods in two different scenarios. The five methods are as follows.
PLCA is the method described in Section 2, where the dictionary W is the harmonic dictionary. Convergence is stopped when the relative difference of the objective function between two iterations falls below 10⁻⁵ or the number of iterations (per frame) exceeds 1000. OT_h is the unmixing method with the OT divergence, as in the first paragraph of Section 4, using the harmonic transportation cost matrix C_h and the harmonic dictionary. OST is like OT_h, but using a dictionary of Dirac vectors (placed at the 8 fundamental frequencies characterising the harmonic dictionary). OST_e, OST_g and OST_{e+g} are the regularised variants of OST, described at the end of Section 5. The iterative procedure in the group-regularised variants is run for 10 iterations (per frame).
In the first experimental scenario, reported in Fig. 3 (a), the data sample is generated by mixing the 1st and 4th elements of the harmonic dictionary, but introducing a small shift of the true fundamental frequencies (with the shift being propagated to the harmonic frequencies). This mimics the effect of possible inharmonicities or of an ill-tuned instrument. The middle plot of Fig. 3 (a) displays the generated sample, together with the "theoretical sample", i.e., without the frequency shift. This shows how a slight shift of the fundamental frequencies can greatly impact the overall spectral distribution. The bottom plot displays the true activation vector and the estimates returned by the five methods. The table reports the value of the (arbitrary) error measure ‖ĥ − h_true‖₁ together with the run time (on an average desktop PC using a MATLAB implementation) for every method. The results show that group-regularised variants of OST lead to the best performance with very light computational burden, and without using the true harmonic dictionary.
(a) Unmixing with shifted fundamental frequencies

Method   | PLCA  | OT_h  | OST   | OST_g | OST_e | OST_{e+g}
ℓ1 error | 0.900 | 0.340 | 0.534 | 0.021 | 0.660 | 0.015
Time (s) | 0.057 | 6.541 | 0.006 | 0.007 | 0.007 | 0.013

(b) Unmixing with wrong harmonic amplitudes

Method   | PLCA  | OT_h  | OST   | OST_g | OST_e | OST_{e+g}
ℓ1 error | 0.791 | 0.430 | 0.971 | 0.045 | 0.911 | 0.048
Time (s) | 0.019 | 6.529 | 0.006 | 0.006 | 0.005 | 0.010

[Figure 3 plots not recoverable from the source: synthetic dictionary (top), generated and theoretical samples (middle), true and estimated activations (bottom).]
Figure 3: Unmixing under model misspecification. See text for details.
In the second experimental scenario, reported in Fig. 3 (b), the data sample is generated by mixing the 1st and 6th elements of the harmonic dictionary, with the right fundamental and harmonic frequencies, but where the spectral amplitudes at the latter do not follow the exponential dampening of the template dictionary (variation of timbre). Here again the group-regularised variants of OST outperform the state-of-the-art approaches, both in accuracy and run time.
Transcription of real musical data. We consider in this section the transcription of a selection
of real piano recordings, obtained from the MAPS dataset (Emiya et al., 2010). The data comes
with a ground-truth binary "piano-roll" which indicates the active notes at every time. The note
fundamental frequencies are given in MIDI, a standard musical integer-valued frequency scale that
matches the keys of a piano, with 12 half-tones (i.e., piano keys) per octave. The spectrogram of
each recording is computed with a Hann window of size 93 ms and 50% overlap (f_s = 44.1 kHz). The columns (time frames) are then normalised to produce V. Each recording is decomposed with PLCA, OST and OST_e, with K = 60 notes (5 octaves). Half of the recording is used for validation of the hyper-parameters and the other half is used as test data. For PLCA, we validated 4 and 3 values of the width and amplitude dampening of the Gaussian kernels used to synthesise the dictionary. For OST, we set ε = qε₀ in Eq. (4), which was found to satisfactorily improve the discrimination of octaves increasingly with frequency, and validated 5 orders of magnitude of ε₀. For OST_e, we additionally validated 4 orders of magnitude of λ_e. Each of the three methods returns an estimate of H. The estimate is turned into a 0/1 piano-roll by only retaining the support of its P_n maximum entries at every frame n, where P_n is the ground-truth number of notes played in frame n.
piano-roll is then numerically compared to its ground truth using the F-measure, a global recognition
measure which accounts both for precision and recall and which is bounded between 0 (critically
wrong) and 1 (perfect recognition). Our evaluation framework follows standard practice in music
transcription evaluation, see for example (Daniel et al., 2008). As detailed in the supplementary
material, it can be shown that OSTg and OSTe+g do not change the location of the maximum entries
in the estimates of H returned by OST and OSTe , respectively, but only their amplitude. As such, they
lead to the same F-measures as OST and OST_e, and we did not include them in the experiments of
this section.
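For reference, the thresholding-plus-scoring step can be written as below; this is our own sketch of the protocol just described, with H_est the estimated activations and roll_true the binary ground-truth piano-roll.

```python
import numpy as np

def frame_f_measure(H_est, roll_true):
    """Keep the top P_n entries of each frame of H, then score the 0/1 roll."""
    roll_est = np.zeros_like(roll_true, dtype=bool)
    for n in range(H_est.shape[1]):
        p_n = int(roll_true[:, n].sum())         # ground-truth polyphony P_n
        if p_n > 0:
            roll_est[np.argsort(H_est[:, n])[-p_n:], n] = True
    tp = np.logical_and(roll_est, roll_true).sum()
    precision = tp / max(roll_est.sum(), 1)
    recall = tp / max(roll_true.sum(), 1)
    return 2 * precision * recall / max(precision + recall, 1e-12)
```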
We first illustrate the complexity of real-data spectra in Fig. 4, where the amplitudes of the first
six partials (the components corresponding to the harmonic frequencies) of a single piano note are
represented along time. Depending on the partial order q, the amplitude evolves with asynchronous
beats and with various slopes. This behaviour is characteristic of piano sounds in which each note
comes from the vibration of up to three coupled strings. As a consequence, the spectral envelope
of such notes cannot be well modelled by a fixed amplitude pattern. Fig. 4 shows that, thanks to its flexibility, OST_e can perfectly recover the true fundamental frequency (MIDI 50) while PLCA is prone to octave errors (confusions between MIDI 50 and MIDI 62).
[Figure 4 plots: (a) thresholded OST_e transcription and (b) thresholded PLCA transcription, shown as pitch (MIDI, 40 to 80) versus time (0.8 to 2.8 s).]
Figure 4: First 6 partials and transcription of a single piano note (note D3, ν₀ = 147 Hz, MIDI 50).
Table 1: Recognition performance (F-measure values) and average computational unmixing times.

MAPS dataset file IDs | PLCA   | PLCA+noise | OST   | OST+noise | OST_e | OST_e+noise
chpn_op25_e4_ENSTDkAm | 0.679  | 0.671      | 0.566 | 0.564     | 0.695 | 0.695
mond_2_SptkBGAm       | 0.616  | 0.713      | 0.470 | 0.534     | 0.610 | 0.607
mond_2_SptkBGCl       | 0.645  | 0.687      | 0.583 | 0.676     | 0.695 | 0.730
muss_1_ENSTDkAm       | 0.613  | 0.478      | 0.513 | 0.550     | 0.671 | 0.667
muss_2_AkPnCGdD       | 0.587  | 0.574      | 0.531 | 0.611     | 0.667 | 0.675
mz_311_1_ENSTDkCl     | 0.561  | 0.593      | 0.580 | 0.628     | 0.625 | 0.665
mz_311_1_StbgTGd2     | 0.663  | 0.617      | 0.701 | 0.718     | 0.747 | 0.747
Average               | 0.624  | 0.619      | 0.563 | 0.612     | 0.673 | 0.684
Time (s)              | 14.861 | 15.420     | 0.004 | 0.005     | 0.210 | 0.202
Then, Table 1 reports the F-measures returned by the three competing approaches on seven 15-s extracts of pieces from Chopin, Beethoven, Mussorgsky and Mozart. For each of the three methods, we have also included a variant that incorporates a flat component in the dictionary that can account for noise or non-harmonic components. In PLCA, this merely consists in adding a constant vector w_{K+1} = 1/M to W. In OST or OST_e this consists in adding a constant column to C̃, whose amplitude has also been validated over 3 orders of magnitude. OST performs comparably or slightly inferiorly to PLCA but with an impressive gain in computational time (≈3000× speedup). Best overall performance is obtained with OST_e+noise, with an average ≈10% performance gain over PLCA and ≈750× speedup.
A Python implementation of OST and a real-time demonstrator are available at https://github.com/rflamary/OST.
7  Conclusions
In this paper we have introduced a new paradigm for spectral dictionary-based music transcription.
As compared to state-of-the-art approaches, we have proposed a holistic measure of fit which is
robust to local and harmonically-related displacements of frequency energies. It is based on a
new form of transportation cost matrix that takes into account the inherent harmonic structure of
musical signals. The proposed transportation cost matrix in turn allows the use of a simplistic dictionary
composed of Dirac vectors placed at the target fundamental frequencies, eliminating the problem
of choosing a meaningful dictionary. Experimental results have shown the robustness and accuracy
of the proposed approach, which strikingly does not come at the price of computational efficiency.
Instead, the particular structure of the dictionary allows for a simple algorithm that is way faster
than state-of-the-art NMF-like approaches. The proposed approach offers new foundations, with
promising results and room for improvement. In particular, we believe exciting avenues of research
concern the learning of C_h from examples and extensions to other areas such as remote sensing,
using application-specific forms of C.
Acknowledgments. This work is supported in part by the European Research Council (ERC) under
the European Union's Horizon 2020 research & innovation programme (project FACTORY) and by
the French ANR JCJC program MAD (ANR-14-CE27-0002). Many thanks to Antony Schutz for
generating & providing some of the musical data.
8
References
J.-D. Benamou, G. Carlier, M. Cuturi, L. Nenna, and G. Peyré. Iterative Bregman projections for regularized transportation problems. SIAM Journal on Scientific Computing, 37(2):A1111-A1138, 2015.
N. Boulanger-Lewandowski, Y. Bengio, and P. Vincent. Discriminative non-negative matrix factorization for multiple pitch estimation. In Proc. International Society for Music Information Retrieval Conference (ISMIR), 2012.
N. Courty, R. Flamary, and D. Tuia. Domain adaptation with regularized optimal transport. In Proc. European Conference on Machine Learning and Principles and Practice of Knowledge Discovery in Databases (ECML PKDD), 2014.
M. Cuturi. Sinkhorn distances: Lightspeed computation of optimal transportation. In Advances in Neural Information Processing Systems (NIPS), 2013.
A. Daniel, V. Emiya, and B. David. Perceptually-based evaluation of the errors usually made when automatically transcribing music. In Proc. International Society for Music Information Retrieval Conference (ISMIR), 2008.
V. Emiya, R. Badeau, and B. David. Multipitch estimation of piano sounds using a new probabilistic spectral smoothness principle. IEEE Trans. Audio, Speech, and Language Processing, 18(6):1643-1654, 2010.
T. Hofmann. Unsupervised learning by probabilistic latent semantic analysis. Machine Learning, 42(1):177-196, 2001.
J. Huang, S. Ma, H. Xie, and C.-H. Zhang. A group bridge approach for variable selection. Biometrika, 96(2):339-355, 2009.
L. Oudre, Y. Grenier, and C. Févotte. Chord recognition by fitting rescaled chroma vectors to chord templates. IEEE Trans. Audio, Speech, and Language Processing, 19(7):2222-2233, 2011.
F. Rigaud, B. David, and L. Daudet. A parametric model and estimation techniques for the inharmonicity and tuning of the piano. The Journal of the Acoustical Society of America, 133(5):3107-3118, 2013.
A. Rolet, M. Cuturi, and G. Peyré. Fast dictionary learning with a smoothed Wasserstein loss. In Proc. International Conference on Artificial Intelligence and Statistics (AISTATS), 2016.
Y. Rubner, C. Tomasi, and L. Guibas. A metric for distributions with applications to image databases. In Proc. International Conference on Computer Vision (ICCV), 1998.
R. Sandler and M. Lindenbaum. Nonnegative matrix factorization with earth mover's distance metric for image analysis. IEEE Trans. Pattern Analysis and Machine Intelligence, 33(8):1590-1602, 2011.
P. Smaragdis and J. C. Brown. Non-negative matrix factorization for polyphonic music transcription. In Proc. IEEE Workshop on Applications of Signal Processing to Audio and Acoustics (WASPAA), 2003.
P. Smaragdis, B. Raj, and M. V. Shashanka. A probabilistic latent variable model for acoustic modeling. In Proc. NIPS Workshop on Advances in Models for Acoustic Processing, 2006.
R. Typke, R. C. Veltkamp, and F. Wiering. Searching notated polyphonic music using transportation distances. In Proc. ACM International Conference on Multimedia, 2004.
C. Villani. Optimal Transport: Old and New. Springer, 2009.
E. Vincent, N. Bertin, and R. Badeau. Adaptive harmonic spectral decomposition for multiple pitch estimation. IEEE Trans. Audio, Speech, and Language Processing, 18:528-537, 2010.
G. Zen, E. Ricci, and N. Sebe. Simultaneous ground metric learning and matrix factorization with earth mover's distance. In Proc. International Conference on Pattern Recognition (ICPR), 2014.
On-Line Estimation of the Optimal Value Function: HJB-Estimators
James K. Peterson
Department of Mathematical Sciences
Martin Hall Box 341907
Clemson University
Clemson, SC 29634-1907
email: peterson@math.clemson.edu
Abstract
In this paper, we discuss on-line estimation strategies that model
the optimal value function of a typical optimal control problem.
We present a general strategy that uses local corridor solutions
obtained via dynamic programming to provide local optimal control sequence training data for a neural architecture model of the
optimal value function.
1  ON-LINE ESTIMATORS
In this paper, the problems of adaptive control using neural architectures are explored in the setting of general on-line estimators. We will try to pay close attention
to the underlying mathematical structure that arises in the on-line estimation process.
The complete effect of a control action u_k at a given time step t_k is clouded by the fact that the state history depends on the control actions taken after time step t_k. So the effect of a control action over all future time must be monitored.
Hence, choice of control must inevitably involve knowledge of the future history
of the state trajectory. In other words, the optimal control sequence can not be
determined until after the fact. Of course, standard optimal control theory supplies
an optimal control sequence to this problem for a variety of performance criteria.
Roughly, there are two approaches of interest: solving the two-point boundary value
problem arising from the solution of Pontryagin's maximum or minimum principle or solving the Hamilton-Jacobi-Bellman (HJB) partial differential equation. However,
the computational burdens associated with these schemes may be too high for realtime use. Is it possible to essentially use on-line estimation to build a solution
to either of these two classical techniques at a lower cost? In other words, if η samples are taken of the system from some initial point under some initial sequence of control actions, can this time series be used to obtain information about the true optimal sequence of controls that should be used in the next η time steps?
We will focus here on algorithm designs for on-line estimation of the optimal control law that are implementable in a control step time of 20 milliseconds or less. We will use local learning methods such as CMAC (Cerebellar Model Articulated Controller) architectures (Albus, 1; W. Miller, 7), and estimators for characterizations of the optimal value function via solutions of the Hamilton-Jacobi-Bellman equation (adaptive critic type methods) (Barto, 2; Werbos, 12).
2  CLASSICAL CONTROL STRATEGIES
In order to discuss on-line estimation schemes based on the Hamilton-Jacobi-Bellman equation, we now introduce a common sample problem:

    min_{u ∈ U} J(x, u, t)                                                      (1)

where

    J(x, u, t) = dist(y(t_f), Γ) + ∫_t^{t_f} L(y(s), u(s), s) ds                (2)

subject to:

    y′(s) = f(y(s), u(s), s),   t ≤ s ≤ t_f                                     (3)
    y(t) = x                                                                    (4)
    y(s) ∈ Y(s) ⊆ ℝ^N,   t ≤ s ≤ t_f                                            (5)
    u(s) ∈ U(s) ⊆ ℝ^M,   t ≤ s ≤ t_f                                            (6)
Here y and u are the state vector and control vector of the system, respectively; U is the space of functions that the control must be chosen from during the minimization process, and (4)-(6) give the initialization and constraint conditions that the state and control must satisfy. The set Γ represents a target constraint set and dist(y(t_f), Γ) indicates the distance from the final state y(t_f) to the constraint set Γ. The optimal value of this problem for the initial state x and time t will be denoted by J(x, t), where

    J(x, t) = min_u J(x, u, t).
It is well known that the optimal value function J(x, t) satisfies a generalized partial differential equation known as the Hamilton-Jacobi-Bellman (HJB) equation:

    −∂J(x, t)/∂t = min_u { L(x, u, t) + (∂J(x, t)/∂x) f(x, u, t) },
    J(x, t_f) = dist(x, Γ).
In the case that J is indeed differentiable with respect to both the state and time arguments, this equation is interpreted in the usual way. However, there are many problems where the optimal value function is not differentiable, even though it is bounded and continuous. In these cases, the optimal value function J can be interpreted as a viscosity solution of the HJB equation and the partial derivatives of J are replaced by the sub- and superdifferentials of J (Crandall, 5). In general, once the HJB equation is solved, the optimal control from state x and time t is then given by the minimum condition

    u ∈ arg min_u { L(x, u, t) + (∂J(x, t)/∂x) f(x, u, t) }.
If the underlying state and time space are discretized using a state mesh of resolution r and a time mesh of resolution s, the HJB equation can be rewritten into the form of the standard Bellman Principle of Optimality (BPO):

    J_rs(x_i, t_j) = min_u { L(x_i, u, t_j)(t_{j+1} − t_j) + J_rs(X(x_i, u), t_{j+1}) },

where X(x_i, u) indicates the new state achieved by using control u over time interval [t_j, t_{j+1}] from initial state x_i. In practice, this equation is solved by successive iterations of the form

    J_rs^{τ+1}(x_i, t_j) = min_u { L(x_i, u, t_j)(t_{j+1} − t_j) + J_rs^{τ}(X(x_i, u), t_{j+1}) },

where τ denotes the iteration cycle and the process is started by initializing J_rs^{0}(x_i, t_j) in a suitable manner. Generally, the iterations continue until the values J_rs^{τ+1}(x_i, t_j) and J_rs^{τ}(x_i, t_j) differ by negligible amounts. This iterative process is usually referred to as dynamic programming (DP). Once this iterative process converges, let J_rs(x_i, t_j) = lim_{τ→∞} J_rs^{τ}, and consider lim_{(r,s)→(0,0)} J_rs(x_i, t_j), where (x_i, t_j) indicates that the discrete grid points depend on the resolution (r, s). In many situations, this limit gives the viscosity solution J(x, t) to the HJB equation.
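For a one-dimensional state, the backward sweep below illustrates the BPO recursion; the linear interpolation stands in for the coarse encoding of off-grid states discussed in Section 3, and all names are our own (this is a sketch, not the paper's implementation).

```python
import numpy as np

def bpo_solve(grid, times, controls, f, L, dist_to_target):
    """Backward BPO pass:
    J(x_i, t_j) = min_u [L(x_i, u, t_j) * dt + J(X(x_i, u), t_{j+1})]."""
    J = np.zeros((len(grid), len(times)))
    J[:, -1] = [dist_to_target(x) for x in grid]  # terminal cost dist(x, Gamma)
    for j in range(len(times) - 2, -1, -1):
        dt = times[j + 1] - times[j]
        for i, x in enumerate(grid):
            J[i, j] = min(L(x, u, times[j]) * dt
                          + np.interp(x + f(x, u, times[j]) * dt, grid, J[:, j + 1])
                          for u in controls)
    return J
```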
Now consider the problem of finding J(x, 0). The Pontryagin minimum principle gives first order necessary conditions that the optimal state x and costate p variables must satisfy. Letting H̃(x, u, p, t) = L(x, u, t) + pᵀ f(x, u, t) and defining

    H(x, p, t) = min_u H̃(x, u, p, t),                                          (7)

the optimal state and costate then must satisfy the following two-point boundary value problem (TPBVP):

    x′(t) = ∂H(x, p, t)/∂p,    x(0) = x,
    p′(t) = −∂H(x, p, t)/∂x,   p(t_f) = 0,                                      (8)

and the optimal control is obtained from (7) once the optimal state and costate
are determined. Note that (7) can not necessarily be solved for the control u in terms of x and p, i.e. a feedback law may not be possible. If the TPBVP can not be solved, then we set J(x, 0) = ∞. In conclusion, in this problem, we are led inevitably to an optimal value function that can be poorly behaved; hence, we can easily imagine that at many (x, t), ∂J/∂x is not available and hence J will not satisfy the HJB equation in the usual sense. So if we estimate J directly using some form of on-line estimation, how can we hope to back out the control law if ∂J/∂x is not available?
3  HJB ESTIMATORS
A potential on-line estimation technique can be based on approximations of the optimal value function. Since the optimal value function should satisfy the HJB equation, these methods will be grouped under the broad classification HJB estimators.

Assume that there is a given initial state x₀ with start time 0. Consider a local patch, or local corridor, of the state space around the initial state x₀, denoted by Ω(x₀). The exact size of Ω(x₀) will depend on the nature of the state dynamics and the starting state. If Ω(x₀) is then discretized using a coarse grid of resolution r and the time domain is discretized using resolution s, an approximate dynamic programming problem can be formulated and solved using the BPO equations. Since the new states obtained via integration of the plant dynamics will in general not land on coarse grid lines, some sort of interpolation must be used to assign the integrated new state value an appropriate coarse grid value. This can be done using the coarse encoding implied by the grid resolution r of Ω(x₀). In addition, multiple grid resolutions may be used with coarse and fine grid approximations interacting with one another as in multigrid schemes (Briggs, 3). The optimal value function so obtained will be denoted by J_rs(z_i, t_j) for any discrete grid point z_i ∈ Ω(x₀) and time point t_j. This approximate solution also supplies an estimate of the optimal control sequence (u*)_j^{η−1} = (u*)_j^{η−1}(z_i, t_j). Some papers on approximate dynamic programming are (Peterson, 8; Sutton, 10; Luus, 6). It is also possible to obtain estimates of the optimal control sequences, states and costates using an η step lookahead and the Pontryagin minimum principle. The associated two-point boundary value problem is solved and the controls computed via u_i ∈ arg min_u H̃(x_i*, u, p_i*, t_i), where (x*)^η and (p*)^η are the calculated optimal state and costate sequences, respectively. This approach is developed in (Peterson, 9) and implemented for vibration suppression in a large space structure by (Carlson, Rothermel and Lee, 4).
For any z_i ∈ Ω(x₀), let (u)_j^{η−1} = (u)_j^{η−1}(z_i, t_j) be a control sequence used from initial state z_i and time point t_j. Thus u_ij is the control used on time interval [t_j, t_{j+1}] from start point z_i. Define z_ij^{j+1} = Z(z_i, u_ij, t_j), the state obtained by integrating the plant dynamics one time step using control u_ij and initial state z_i. Then u_{i,j+1} is the control used on time interval [t_{j+1}, t_{j+2}] from start point z_ij^{j+1} and the new state is z_ij^{j+2} = Z(z_ij^{j+1}, u_{i,j+1}, t_{j+1}); in general, u_{i,j+k} is the control used on time interval [t_{j+k}, t_{j+k+1}] from start point z_ij^{j+k} and the new state is z_ij^{j+k+1} = Z(z_ij^{j+k}, u_{i,j+k}, t_{j+k}), where z_ij^{j} = z_i.
Let's now assume that optimal control information u_ij (we will dispense with the superscript * labeling for expositional cleanness) is available at each of the discrete grid points (z_i, t_j) ∈ Ω(x₀). Let Φ_rs(z_i, t_j) denote the value of a neural architecture (CMAC, feedforward, associative etc.) which is trained as follows using this optimal information for 0 ≤ k ≤ η − j − 1 (the equation below holds for the converged value of the network's parameters and the actual dependence of the network on those parameters is notationally suppressed):

    Φ_rs(z_ij^{j+k}, t_{j+k}) = ξ Φ_rs(z_ij^{j+k+1}, t_{j+k+1}) + ζ ℛ(z_ij^{j+k}, u_{i,j+k}),      (9)

where 0 < ξ, ζ ≤ 1 and we define a typical reinforcement function ℛ by

    ℛ(z_ij^{j+k}, u_{i,j+k}) = L(z_ij^{j+k}, u_{i,j+k}, t_{j+k}) (t_{j+k+1} − t_{j+k})   if 0 ≤ k < η − j − 1,      (10)

    ℛ(z_ij^{j+k}, u_{i,j+k}) = L(z_ij^{j+k}, u_{i,j+k}, t_{j+k}) (t_{j+k+1} − t_{j+k}) + dist(z_ij^{j+k+1}, Γ)   if k = η − j − 1.      (11)
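Viewed as a tabular procedure, rule (9) is a simple relaxation sweep; the dictionary-based sketch below is our own illustration (the learning rate lr and the transition-list format are assumptions, not part of the paper).

```python
def train_phi(phi, transitions, xi=1.0, zeta=1.0, passes=50, lr=0.1):
    """Sweep rule (9) over a corridor of transitions.

    phi: dict mapping a discretised (z, t) key to a value estimate.
    transitions: list of (key, next_key, R) with R the reinforcement (10)-(11).
    """
    for _ in range(passes):
        for key, next_key, R in transitions:
            target = xi * phi.get(next_key, 0.0) + zeta * R
            phi[key] = (1 - lr) * phi.get(key, 0.0) + lr * target  # move toward (9)
    return phi
```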
For notational convenience, we will now drop the notational dependence on the time grid points and simply refer to the reinforcement by ℛ(z_ij^{j+k}, u_{i,j+k}).

Then applying (9) repeatedly, for any 0 ≤ p ≤ η − j,

    Φ_rs(z_i, t_j) = ξ^p Φ_rs(z_ij^{j+p}, t_{j+p}) + ζ Σ_{k=0}^{p−1} ξ^k ℛ(z_ij^{j+k}, u_{i,j+k}).

Thus, the function W_rs can be defined by

    W_rs(z_i, t_j, ξ, ζ) = ξ^{η−j} Φ_rs(z_ij^{η}, t_η) + ζ Σ_{k=0}^{η−j−1} ξ^k ℛ(z_ij^{j+k}, u_{i,j+k}),      (12)

where the term u_{i,η} will be interpreted as u_{i,η−1}.
It follows then that, since u_ij is optimal, the function Φ_rs(z_i, t_j) = W_rs(z_i, t_j, 1, 1) clearly estimates the optimal value J_rs(z_i, t_j) itself (see Q-Learning; Watkins, 11).
An alternate approach that does not model J indirectly, as is done above, is to train a neural model Φ_rs(z_i, t_j) directly on the data J(z_i, t_j) that is computed in each local corridor calculation. In either case, the above observations lead to the following algorithm:
Initialization:
Here, the iteration count is τ = 0. For given starting state x₀ and local look ahead of η time steps, form the local corridor Ω(x₀) and solve the associated approximate BPO equation for J_rs(z_i, t_j). Compute the associated optimal control sequences for each (z_i, t_j) pair, (u*)_j^{η−1} = (u*)_j^{η−1}(z_i, t_j). Initialize the neural architecture for the optimal value estimate using Φ_rs^{0}(z_i, t_j) = J_rs(z_i, t_j).
Estimate of New Optimal Control Sequence:
For the next η time steps, an estimate must be made of the next optimal control action in time interval [t_{η+k}, t_{η+k+1}]. The initial state is any z_i in Ω(x_η) (x_η is one such choice) and the initial time is t_η. For the time interval [t_η, t_{η+1}], if the model Φ_rs^{τ}(z_i, t_j) is differentiable, the new control can be estimated by

    u_{η+1} ∈ arg min_u { L(z_η, u, t_η)(t_{η+1} − t_η) + (∂Φ_rs^{τ}/∂z)(z_η, t_η) f(z_η, u, t_η)(t_{η+1} − t_η) }.

For ease of notation, let z_{η+1} denote the new state obtained using the control u_{η+1} on the interval [t_η, t_{η+1}]. Then choose the next control in the same manner. Clearly, if z_{η+k} denotes the new state obtained using the control u_{η+k−1} on the interval [t_{η+k−1}, t_{η+k}], the next control is chosen to satisfy

    u_{η+k+1} ∈ arg min_u { L(z_{η+k}, u, t_{η+k})(t_{η+k+1} − t_{η+k}) + (∂Φ_rs^{τ}/∂z)(z_{η+k}, t_{η+k}) f(z_{η+k}, u, t_{η+k})(t_{η+k+1} − t_{η+k}) }.
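Both branches of this step can be exercised with a finite candidate control set; the sketch below (ours, scalar state) uses a central finite difference in place of ∂Φ/∂z, which reduces to the differentiable rule when Φ is smooth.

```python
import numpy as np

def next_control(phi, z, t, controls, f, L, dt, h=1e-4):
    """One step of the control-estimation rule for a scalar state z.

    phi: callable (z, t) -> value estimate; controls: finite candidate set.
    """
    dphi = (phi(z + h, t) - phi(z - h, t)) / (2.0 * h)   # stand-in for dPhi/dz
    costs = [L(z, u, t) * dt + dphi * f(z, u, t) * dt for u in controls]
    return controls[int(np.argmin(costs))]
```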
Alternately, if the neural architecture is not differentiable (that is, ∂Φ_rs/∂z is not available), the new control action can be computed via

    u_{η+k+1} ∈ arg min_u { L(z_{η+k}, u, t_{η+k})(t_{η+k+1} − t_{η+k}) + Φ_rs^{τ}(Z(z_{η+k}, u, t_{η+k}), t_{η+k+1}) }.
Update of the Neural Estimator:
The new starting point for the dynamics is now x_η and there is a new associated local corridor Ω(x_η). The neural estimator is then updated using either the HJB or the BPO equations over the local corridor Ω(x_η). Using the BPO equations, for all z_i ∈ Ω(x_η) the updates are

    Φ_rs^{τ+1}(z_i, t_{η+j}) = L(z_i, û_{i,η+j}, t_{η+j})(t_{η+j+1} − t_{η+j}) + Φ_rs^{τ}(Z(z_i, û_{i,η+j}, t_{η+j}), t_{η+j+1}),

where (û)_j^{η−1} indicates the optimal control estimates obtained in the previous algorithm step. Finally, using the HJB equation, for all z_i ∈ Ω(x_η) the updates are

    Φ_rs^{τ+1}(z_i, t_{η+j}) = Φ_rs^{τ}(z_i, t_{η+j+1}) + min_u { L(z_i, u, t_{η+j})(t_{η+j+1} − t_{η+j}) + (∂Φ_rs^{τ}/∂z)(z_i, t_{η+j}) f(z_i, u, t_{η+j})(t_{η+j+1} − t_{η+j}) }.
Comparison to BPO optimal control sequence:
Now solve the associated approximate BPO equation for each z_i in the local corridor Ω(x_η) for J_rs(z_i, t_{η+j}). Compute the new approximate optimal control sequences for each (z_i, t_{η+j}) pair, (u*)_{η+j} = (u*)_{η+j}(z_i, t_{η+j}), and compare them to the estimated sequences (û)_{η+j}. If the discrepancy is out of tolerance (this is a design decision), initialize the neural architecture for the optimal value estimate using Φ_rs(z_i, t_{η+j}) = J_rs(z_i, t_{η+j}). If the discrepancy is acceptable, terminate the BPO approximation calculations for M future iterations and use the neural architectures alone for on-line estimation.
=
The determination of the stability and convergence properties of anyon-line approximation procedure of this sort is intimately connected with the the optimal value
function which solves the generalized HJB equation. We conjecture the following
limit converges to a viscosity solution of the HJB equation for the given optimal
control problem:
J(x, t)
Further, there are stability questions and there are interesting issues relating to the
use of multiple state resolutions rl and r2 and the corresponding different approximations to J, leading to the use of multigrid like methods on the HJ B equation
(see, for example, Briggs, 3). Also note that there is an advantage to using CMAC
325
326
Peterson
architectures for the approximation of the optimal value function J j since J need
not be smooth, the CMAC's lack of differentiability wit.h respect to its inputs is not
a problem and in fact is a virtue.
Acknowledgements
We acknowledge the partial support of NASA grant NAG 3-1311 from the Lewis
Research Center.
References
1. Albus, J. 1975. "A New Approach to Manipulator Control: The Cerebellar
Model Articulation Controller (CMAC)." J. Dynamic Systems, Measurement and Control, 220 - 227.
2. Barto, A., R. Sutton, C. Anderson. 1983 "Neuronlike Adaptive Elements
That Can Solve Difficult Learning Control Problems." IEEE Trans. Systems, Man Cybernetics, Vol. SMC-13, No. 5, September/October, 834 - 846.
3. Briggs, W. 1987. A Multigrid Tutorial, SIAM, Philadelphia, PA.
4. Carlson, R., C. Lee and K. Rothermel. 1992. "Real Time Neural Control
of an Active Structure", Artificial Neural Networks in Engineering
2, 623 - 628.
5. Crandall, M. and P. Lions. 1983. "Viscosity solutions of Hamilton-Jacobi
Equations." Trans. American Math. Soc., Vol. 277, No.1, 1 - 42.
6. Luus, R. 1990. " Optimal Control by Dynamic Programming Using Systematic Reduction of Grid Size", Int. J. Control, Vol. 51, No.5, 995 - 1013.
7. Miller, W. 1987. "Sensor-Based Control of Robotic Manipulators Using as
General Learning Algorithm." IEEE J. Robot. Automat., Vol RA-3, No.2,
157 - 165
8. Peterson, J. 1992. "Neural Network Approaches to Estimating Directional
Cost Information and Path Planning in Analog Valued Obstacle Fields",
HEURISTICS: The Journal of Knowledge Engineering, Special Issue on
Artificial Neural Networks, Vol. 5, No.2, Summer, 50 - 61.
9. Peterson, J. 1992. "On-Line Estimation of Optimal Control Sequences:
Pontryagin Estimators", Artificial Neural Networks in Engineering
2, ed. Dagli et. al., 579 - 584.
10. Sutton, R. 1991. "Planning by Incremental Dynamic Programming", Proceedings of the Ninth International Workshop on Machine Learning, 353 - 357.
11. Watkins, C. 1989. Learning From Delayed Rewards, Ph. D. Dissertation, King's College.
12. Werbos, P. 1990. "A Menu of Designs for Reinforcement Learning Over
Time". In Neural Networks for Control, Ed. Miller, W. R. Sutton and
P. Werbos, 67 - 96.
6,058 | 6,480 | Coevolutionary Latent Feature Processes for
Continuous-Time User-Item Interactions
Yichen Wang†, Nan Du‡, Rakshit Trivedi†, Le Song†
‡ Google Research
† College of Computing, Georgia Institute of Technology
{yichen.wang, rstrivedi}@gatech.edu, dunan@google.com
lsong@cc.gatech.edu
Abstract
Matching users to the right items at the right time is a fundamental task in recommendation systems. As users interact with different items over time, users' and items' features may evolve and co-evolve over time. Traditional models based on static latent features or discretizing time into epochs can become ineffective for capturing the fine-grained temporal dynamics in the user-item interactions. We propose a coevolutionary latent feature process model that accurately captures the coevolving nature of users' and items' features. To learn the parameters, we design an efficient convex optimization algorithm with novel low-rank space-sharing constraints. Extensive experiments on diverse real-world datasets demonstrate significant improvements in user behavior prediction compared to state-of-the-art methods.
1 Introduction
Online social platforms and service websites, such as Reddit, Netflix and Amazon, are attracting
thousands of users every minute. Effectively recommending the appropriate service items is a
fundamentally important task for these online services. By understanding the needs of users and
serving them with potentially interesting items, these online platforms can improve the satisfaction of
users, and boost the activities or revenue of the sites due to increased user postings, product purchases,
virtual transactions, and/or advertisement clicks [30, 9].
As the famous saying goes, "You are what you eat and you think what you read", both users' interests and items' semantic features are dynamic and can evolve over time [18, 4]. The interactions between
users and service items play a critical role in driving the evolution of user interests and item features.
For example, for movie streaming services, a long-time fan of comedy watches an interesting science
fiction movie one day, and starts to watch more science fiction movies in place of comedies. Likewise,
a single movie may also serve different segment of audiences at different times. For example, a movie
initially targeted for an older generation may become popular among the younger generation, and the
features of this movie need to be redefined.
Another important aspect is that users' interests and items' features can co-evolve over time, that
is, their evolutions are intertwined and can influence each other. For instance, in online discussion
forums, such as Reddit, although a group (item) is initially created for political topics, users with very
different interest profiles can join this group (user → item). Therefore, the participants can shape the actual direction (or features) of the group through their postings and responses. It is not unlikely that this group can eventually become one about education simply because most users here care about education (item → user). As the group is evolving towards topics on education, some users
may become more attracted to education topics, to the extent that they even participate in other dedicated groups on education. On the opposite side, some users may gradually gain interest in sports groups, lose interest in political topics and become inactive in this group. Such coevolutionary
nature of user-item interactions raises very interesting questions on how to model them elegantly and
how to learn them from observed interaction data.
30th Conference on Neural Information Processing Systems (NIPS 2016), Barcelona, Spain.
Nowadays, user-item interaction data are archived at increasing temporal resolution and are becoming increasingly available. Each individual user-item interaction is typically logged in the database with
the precise time-stamp of the interaction, together with additional context of that interaction, such
as tag, text, image, audio and video. Furthermore, the user-item interaction data are generated in an
asynchronous fashion in a sense that any user can interact with any item at any time and there may
not be any coordination or synchronization between two interaction events. These types of event data
call for new representations, models, learning and inference algorithms.
Despite the temporal and asynchronous nature of such event data, for a long time the data has been treated predominantly as a static graph, and fixed latent features have been assigned to each
user and item [21, 5, 2, 10, 29, 30, 25]. In more sophisticated methods, the time is divided into
epochs, and static latent feature models are applied to each epoch to capture some temporal aspects
of the data [18, 17, 28, 6, 13, 4, 20, 17, 28, 12, 15, 24, 23]. For such epoch-based methods, it is not
clear how to choose the epoch length parameter due to the asynchronous nature of the user-item
interactions. First, different users may have very different time-scale when they interact with those
service items, making it very difficult to choose a unified epoch length. Second, it is not easy for
the learned model to answer fine-grained time-sensitive queries such as when a user will come
back for a particular service item. It can only make such predictions down to the resolution of the
chosen epoch length. Most recently, [9] proposed an efficient low-rank point process model for
time-sensitive recommendations from recurrent user activities. However, it still fails to capture the
heterogeneous coevolutionary properties of user-item interactions with much more limited model
flexibility. Furthermore, it is difficult for this approach to incorporate observed context features.
In this paper, we propose a coevolutionary latent feature process for continuous-time user-item
interactions, which is designed specifically to take into account the asynchronous nature of event
data, and the co-evolving nature of users' and items' latent features. Our model assigns an evolving
latent feature process for each user and item, and the co-evolution of these latent feature processes is
considered using two parallel components:
• (Item → User) A user's latent feature is determined by the latent features of the items he interacted with. Furthermore, the contributions of these items' features are temporally discounted by an exponential decaying kernel function, which we call the Hawkes [14] feature averaging process.
• (User → Item) Conversely, an item's latent features are determined by the latent features of the users who interact with the item. Similarly, the contribution of these users' features is also modeled as a Hawkes feature averaging process.
Besides the two sets of intertwined latent feature processes, our model can also take into account
the presence of potentially high dimensional observed context features and links the latent features
to the observed context features using a low dimensional projection. Despite the sophistication of
our model, we show that the model parameter estimation, a seemingly non-convex problem, can
be transformed into a convex optimization problem, which can be efficiently solved by the latest
conditional gradient-like algorithm. Finally, the coevolutionary latent feature processes can be used
for down-streaming inference tasks such as the next-item and the return-time prediction. We evaluate
our method over a variety of datasets, verifying that our method can lead to significant improvements
in user behavior prediction compared to the state-of-the-arts.
2 Background on Temporal Point Processes
This section provides necessary concepts of the temporal point process [7]. It is a random process whose realization consists of a list of events localized in time, {t_i} with t_i ∈ R_+. Equivalently, a given temporal point process can be represented as a counting process, N(t), which records the number of events before time t. An important way to characterize temporal point processes is via the conditional intensity function λ(t), a stochastic model for the time of the next event given all the previous events. Formally, λ(t)dt is the conditional probability of observing an event in a small window [t, t+dt) given the history T(t) up to t, i.e., λ(t)dt := P{event in [t, t+dt) | T(t)} = E[dN(t) | T(t)], where one typically assumes that only one event can happen in a small window of size dt, i.e., dN(t) ∈ {0, 1}. The functional form of the intensity is often designed to capture the phenomena of interest. One commonly used form is the Hawkes process [14, 11, 27, 26], whose intensity models the excitation between events, i.e., λ(t) = η + α Σ_{t_i ∈ T(t)} κ_ω(t − t_i), where κ_ω(t) := exp(−ωt) is an exponential triggering kernel and η > 0 is a baseline intensity independent of the history. Here, the occurrence of each historical event increases the intensity by a certain amount determined by the kernel κ_ω and the weight α > 0, making the intensity history dependent and a stochastic process by itself.
[Figure 1 here]
Figure 1: Model illustration. (a) User-item interaction events data. Each edge contains user, item, time, and interaction feature. (b) Alice's latent feature consists of three components: the drift of the baseline feature, the time-weighted average of interaction features, and the weighted average of item features. (c) The symmetric item latent feature process. A, B, C, D are embedding matrices from the high-dimensional feature space to the latent space. κ_ω(t) = exp(−ωt) is an exponential decaying kernel.
From the survival analysis theory [1], given the history T = {t_1, . . . , t_n}, for any t > t_n, we characterize the conditional probability that no event happens during [t_n, t) as S(t|T) = exp(−∫_{t_n}^{t} λ(τ) dτ). Moreover, the conditional density that an event occurs at time t is f(t|T) = λ(t) S(t|T).
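To make this background concrete, here is a minimal Python/NumPy sketch (our own illustration, with made-up parameter values η = 0.5, α = 0.8, ω = 1.0 and event times) that evaluates the Hawkes intensity λ(t) and the survival probability S(t|T) by numerical integration:

```python
import numpy as np

def hawkes_intensity(t, history, eta=0.5, alpha=0.8, omega=1.0):
    """lambda(t) = eta + alpha * sum_{t_i in T(t)} exp(-omega * (t - t_i))."""
    past = history[history < t]
    return eta + alpha * np.sum(np.exp(-omega * (t - past)))

def survival(t, history, t_n, n_grid=1000):
    """S(t|T) = exp(-int_{t_n}^{t} lambda(tau) dtau), via the trapezoidal rule."""
    taus = np.linspace(t_n, t, n_grid)
    lams = np.array([hawkes_intensity(tau, history) for tau in taus])
    return np.exp(-np.trapz(lams, taus))

history = np.array([0.2, 0.9, 1.5])        # past event times T(t)
print(hawkes_intensity(2.0, history))      # intensity at t = 2.0
print(survival(3.0, history, t_n=1.5))     # P(no event in [1.5, 3.0))
```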
3 Coevolutionary Latent Feature Processes
In this section, we present the framework to model the temporal dynamics of user-item interactions.
We first explicitly capture the co-evolving nature of users' and items' latent features. Then, based on the compatibility between a user's and an item's latent features, we model the user-item interaction by a
temporal point process and parametrize the intensity function by the feature compatibility.
3.1 Event Representation
Given m users and n items, the input consists of all users' history events: T = {e_k}, where e_k = (u_k, i_k, t_k, q_k) means that user u_k interacts with item i_k at time t_k and generates an interaction feature vector q_k ∈ R^D. For instance, the interaction feature can be a textual message delivered
from the user to the chatting-group in Reddit or a review of the business in Yelp. It can also be
unobservable if the data only contains the temporal information.
3.2 Latent Feature Processes
We associate a latent feature vector u_u(t) ∈ R^K with a user u and i_i(t) ∈ R^K with an item i. These features represent the subtle properties which cannot be directly observed, such as the interests of a user and the semantic topics of an item. Specifically, we model u_u(t) and i_i(t) as follows:
User latent feature process. For each user u, we formulate u_u(t) as:

    u_u(t) = A φ_u(t) + B Σ_{e_k : u_k = u, t_k < t} κ_ω(t − t_k) q_k + Σ_{e_k : u_k = u, t_k < t} κ_ω(t − t_k) i_{i_k}(t_k),    (1)

where the three terms are the base drift, the Hawkes interaction feature averaging, and the co-evolution (Hawkes item feature averaging), respectively.

Item latent feature process. For each item i, we specify i_i(t) as:

    i_i(t) = C φ_i(t) + D Σ_{e_k : i_k = i, t_k < t} κ_ω(t − t_k) q_k + Σ_{e_k : i_k = i, t_k < t} κ_ω(t − t_k) u_{u_k}(t_k),    (2)

where the terms are again the base drift, the Hawkes interaction feature averaging, and the co-evolution (Hawkes user feature averaging).
where A, B, C, D ∈ R^{K×D} are the embedding matrices mapping from the explicit high-dimensional feature space into the low-rank latent feature space. Figure 1 highlights the basic setting of our model. Next we discuss the rationale of each term in detail.
Drift of base features. φ_u(t) ∈ R^D and φ_i(t) ∈ R^D are the explicitly observed properties of user u and item i, which allow the basic features of users (e.g., a user's self-crafted interests) and items (e.g., textual categories and descriptions) to smoothly drift through time. Such changes of basic features are normally caused by external influences. One can parametrize φ_u(t) and φ_i(t) in many different ways, e.g., using an exponential decaying basis to interpolate these features observed at different times.
Evolution with interaction feature. Users' and items' features can evolve and be influenced by the characteristics of their interactions. For instance, the genre changes of movies indicate the changing tastes of users. The theme of a chatting-group can easily be shifted to certain topics of the involved discussions. In consequence, this term captures the cumulative influence of the past interaction features on the changes of the latent user (item) features. The triggering kernel κ_ω(t − t_k) associated with each past interaction at t_k quantifies how such influence changes through time. Its parametrization depends on the phenomena of interest. Without loss of generality, we choose the exponential kernel κ_ω(t) = exp(−ωt) to reduce the influence of each past event. In other words, only the most recent interaction events will have bigger influence. Finally, the embeddings B, D map the observable high-dimensional interaction features to the latent space.
Coevolution with Hawkes feature averaging processes. Users' and items' latent features can mutually influence each other. This term captures the two parallel processes:
• Item → User. A user's latent feature is determined by the latent features of the items he interacted with. At each time t_k, the latent item feature is i_{i_k}(t_k). Furthermore, the contributions of these items' features are temporally discounted by a kernel function κ_ω(t), which we call the Hawkes feature averaging process. The name comes from the fact that the Hawkes process captures the temporal influence of history events in its intensity function. In our model, we capture both the temporal influence and the feature of each history item as a latent process.
• User → Item. Conversely, an item's latent features are determined by the latent features of all the users who interact with the item. At each time t_k, the latent feature is u_{u_k}(t_k). Similarly, the contribution of these users' features is also modeled as a Hawkes feature averaging process.
Note that to compute the third co-evolution term, we need to keep track of the user's and item's latent features after each interaction event, i.e., at t_k, we need to compute u_{u_k}(t_k) and i_{i_k}(t_k) in (1) and (2), respectively. Let I(·) be the indicator function; we can show by induction that

    u_{u_k}(t_k) = A [ Σ_{j=1}^{k} I[u_j = u_k] κ_ω(t_k − t_j) φ_{u_j}(t_j) ] + B [ Σ_{j=1}^{k} I[u_j = u_k] κ_ω(t_k − t_j) q_j ]
                 + C [ Σ_{j=1}^{k−1} I[u_j = u_k] κ_ω(t_k − t_j) φ_{i_j}(t_j) ] + D [ Σ_{j=1}^{k−1} I[u_j = u_k] κ_ω(t_k − t_j) q_j ]

    i_{i_k}(t_k) = C [ Σ_{j=1}^{k} I[i_j = i_k] κ_ω(t_k − t_j) φ_{i_j}(t_j) ] + D [ Σ_{j=1}^{k} I[i_j = i_k] κ_ω(t_k − t_j) q_j ]
                 + A [ Σ_{j=1}^{k−1} I[i_j = i_k] κ_ω(t_k − t_j) φ_{u_j}(t_j) ] + B [ Σ_{j=1}^{k−1} I[i_j = i_k] κ_ω(t_k − t_j) q_j ]
In summary, we have incorporated both the exogenous and endogenous influences into a single model. First, each process evolves according to the respective exogenous base temporal user (item) features φ_u(t) (φ_i(t)). Second, the two processes also inter-depend on each other due to the endogenous influences from the interaction features and the entangled latent features. We present our model in the most general form, and the specific choices of u_u(t), i_i(t) are dependent on applications. For example, if no interaction feature is observed, we drop the second term in (1) and (2).
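As a concrete illustration of (1), the following sketch (Python/NumPy; the event list, the base-feature callable phi_u, and the item_latent_at cache of i_{i_k}(t_k) values are hypothetical stand-ins, not the paper's code) evaluates u_u(t); the computation of i_i(t) in (2) is symmetric, with (C, D) in place of (A, B) and the roles of users and items exchanged:

```python
import numpy as np

def kappa(dt, omega=1.0):
    """Exponential decaying kernel kappa_omega(dt) = exp(-omega * dt)."""
    return np.exp(-omega * dt)

def user_latent(t, u, events, A, B, phi_u, item_latent_at):
    """u_u(t) per Eq. (1): base drift plus Hawkes averaging of interaction
    features q_k and of item latent features i_{i_k}(t_k) over u's past events."""
    out = A @ phi_u(t)                                          # base drift
    for (u_k, i_k, t_k, q_k) in events:
        if u_k == u and t_k < t:
            out += kappa(t - t_k) * (B @ q_k)                   # interaction features
            out += kappa(t - t_k) * item_latent_at[(i_k, t_k)]  # co-evolution term
    return out
```

In practice the values u_{u_k}(t_k) and i_{i_k}(t_k) are maintained incrementally after each event, as in the induction above, so no nested recursion is required.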
3.3 User-Item Interactions as Temporal Point Processes
For each user, we model the recurrent occurrences of user u's interactions with all items as a multidimensional temporal point process. In particular, the intensity in the i-th dimension (item i) is:

    λ^{u,i}(t) = η^{u,i} + u_u(t)^⊤ i_i(t),    (3)

where the first term captures the long-term preference and the second the short-term preference.
where η = (η^{u,i}) is a baseline preference matrix. The rationale of this formulation is threefold. First, instead of discretizing the time, we explicitly model the timing of each event occurrence as a continuous random variable, which naturally captures the heterogeneity of the temporal interactions between users and items. Second, the base intensity η^{u,i} represents the long-term preference of user u for item i, independent of the history. Third, the tendency for user u to interact with item i at time t depends on the compatibility of their instantaneous latent features. Such compatibility is evaluated through the inner product of their time-varying latent features.
Our model inherits the merits from classic content filtering, collaborative filtering, and the most
recent temporal models. For the cold-start users having few interactions with the items, the model
adaptively utilizes the purely observed user (item) base properties and interaction features to adjust
its predictions, which incorporates the key idea of feature-based algorithms. When the observed
features are missing and non-informative, the model makes use of the user-item interaction patterns to
make predictions, which is the strength of collaborative filtering algorithms. However, being different
from the conventional matrix-factorization models, the latent user and item features in our model are
entangled and able to co-evolve over time. Finally, the general temporal point process ingredient of
the model makes it possible to capture the dynamic preferences of users to items and their recurrent
interactions, which is more flexible and expressive.
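A minimal sketch of (3) follows (the helper names below are ours; user_latent and item_latent stand for any implementation of the latent feature processes above):

```python
import numpy as np

def intensity(t, u, i, eta, user_latent, item_latent):
    """Eq. (3): lambda^{u,i}(t) = eta[u, i] + <u_u(t), i_i(t)>."""
    return eta[u, i] + user_latent(t, u) @ item_latent(t, i)

def rank_items(t, u, eta, user_latent, item_latent, n_items):
    """Order items by instantaneous compatibility with user u at time t."""
    lam = np.array([intensity(t, u, i, eta, user_latent, item_latent)
                    for i in range(n_items)])
    return np.argsort(-lam)
```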
4 Parameter Estimation
In this section, we propose an efficient framework to learn the parameters. A key challenge is that
the objective function is non-convex in the parameters. However, we reformulate it as a convex
optimization by creating new parameters. Finally, we present the generalized conditional gradient
algorithm to efficiently solve the objective function.
Given a collection of events T recorded within a time window [0, T ), we estimate the parameters
using maximum likelihood estimation of all events. The joint negative log-likelihood [1] is:
    ℓ = − Σ_{e_k} log λ^{u_k,i_k}(t_k) + Σ_{u=1}^{m} Σ_{i=1}^{n} ∫_0^T λ^{u,i}(τ) dτ    (4)
The objective function is non-convex in variables {A, B, C, D} due to the inner product term in (3).
To learn these parameters, one way is to fix the matrix rank and update each matrix using gradient-based methods. However, this approach is easily trapped in local optima and one needs to tune the rank for the best performance. Instead, observing that the product of two low rank matrices yields a low rank matrix, we will optimize over the new matrices and obtain a convex objective function.
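For intuition, the objective (4) can be evaluated numerically as in the sketch below (an illustration under our own assumptions: we approximate each integral on a regular grid, whereas with the exponential kernel the integrals also admit closed forms):

```python
import numpy as np

def neg_log_likelihood(events, T, intensity, m, n, n_grid=200):
    """Eq. (4): -sum_k log lambda^{u_k,i_k}(t_k)
               + sum_{u,i} int_0^T lambda^{u,i}(tau) dtau."""
    nll = -sum(np.log(intensity(t_k, u_k, i_k)) for (u_k, i_k, t_k, _) in events)
    taus = np.linspace(0.0, T, n_grid)
    for u in range(m):
        for i in range(n):
            lams = np.array([intensity(tau, u, i) for tau in taus])
            nll += np.trapz(lams, taus)
    return nll
```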
4.1 Convex Objective Function
We will create new parameters such that the intensity function is convex. Since u_u(t) contains the averaging of i_{i_k}(t_k) in (1), C, D will appear in u_u(t). Similarly, A, B will appear in i_i(t). Hence the matrices X = {A^⊤A, B^⊤B, C^⊤C, D^⊤D, A^⊤B, A^⊤C, A^⊤D, B^⊤C, B^⊤D, C^⊤D} will appear in (3) after expansion, due to the inner product i_i(t)^⊤ u_u(t). For each matrix product in X, we denote it as a new variable X_i and optimize the objective function over these variables. We denote the corresponding coefficient of X_i as x_i(t), which can be exactly computed. Denoting Λ(t) = (λ^{u,i}(t)), we can rewrite the intensity in (3) in matrix form as:

    Λ(t) = η + Σ_{i=1}^{10} x_i(t) X_i    (5)
The intensity is convex in each new variable X_i, and hence so is the objective function. We will optimize over the new set of variables X subject to the constraints that i) some of them share the same low rank space, e.g., A^⊤ is shared as the column space in A^⊤A, A^⊤B, A^⊤C, A^⊤D, and ii) the new variables are low rank (the product of two low rank matrices is low rank). Next, we show how to incorporate the space sharing constraint for a general objective function with an efficient algorithm.
First, we create a symmetric block matrix X ∈ R^{4D×4D} and place each X_i as follows:

    X = \begin{pmatrix} X_1 & X_2 & X_3 & X_4 \\ X_2^\top & X_5 & X_6 & X_7 \\ X_3^\top & X_6^\top & X_8 & X_9 \\ X_4^\top & X_7^\top & X_9^\top & X_{10} \end{pmatrix}
      = \begin{pmatrix} A^\top A & A^\top B & A^\top C & A^\top D \\ B^\top A & B^\top B & B^\top C & B^\top D \\ C^\top A & C^\top B & C^\top C & C^\top D \\ D^\top A & D^\top B & D^\top C & D^\top D \end{pmatrix}    (6)
Intuitively, minimizing the nuclear norm of X ensures all the low rank space sharing constraints. First, the nuclear norm ‖·‖_* is the sum of all singular values, and is commonly used as a convex surrogate for the matrix rank function [22]; hence minimizing ‖X‖_* ensures it to be low rank and gives the unique low rank factorization of X. Second, since X_1, X_2, X_3, X_4 are in the same row and share A^⊤, the space sharing constraints are naturally satisfied.
a limited number of prototypical types, we set ? to be low rank. Hence the objective is:
min
?>0,X>0
`(X, ?) + ?k?k? + kXk? + kX
X > k2F
(7)
where ` is defined in (4) and k ? kF is the Frobenius norm and the associated constraint ensures X to
be symmetric. {?, , } control the trade-off between the constraints. After obtaining X, one can
directly apply (5) to compute the intensity and make predictions.
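A quick NumPy check of the construction in (6) (random factor matrices of our choosing; the nuclear norm is computed from the SVD):

```python
import numpy as np

def build_X(A, B, C, D):
    """Eq. (6): with G = [A B C D] of size K x 4D, X = G^T G stacks the
    blocks A^T A, A^T B, ..., D^T D and is symmetric by construction."""
    G = np.hstack([A, B, C, D])
    return G.T @ G

def nuclear_norm(M):
    """||M||_* = sum of singular values, the convex surrogate for rank."""
    return np.linalg.svd(M, compute_uv=False).sum()

rng = np.random.default_rng(0)
K, d = 5, 8                                   # latent rank and feature dimension
A, B, C, D = [rng.standard_normal((K, d)) for _ in range(4)]
X = build_X(A, B, C, D)
assert X.shape == (4 * d, 4 * d) and np.allclose(X, X.T)
print(nuclear_norm(X))                        # rank(X) <= K, so few nonzero singular values
```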
4.2 Generalized Conditional Gradient Algorithm
We use the latest generalized conditional gradient algorithm [9] to solve the optimization problem (7). We provide details in the appendix. It has an alternating updates scheme and efficiently handles the nonnegativity constraint using proximal gradient descent and the nuclear norm constraint using conditional gradient descent. It is guaranteed to converge in O(1/t + 1/t²), where t is the number of iterations. For both the proximal and the conditional gradient parts, the algorithm achieves the corresponding optimal convergence rates. If there is no nuclear norm constraint, the results recover the well-known optimal O(1/t²) rate achieved by the proximal gradient method for smooth convex optimization. If there are no nonnegativity constraints, the results recover the well-known O(1/t) rate attained by the conditional gradient method for smooth convex minimization. Moreover, the per-iteration complexity is linear in the total number of events with O(mnk), where m is the number of users, n is the number of items and k is the number of events per user-item pair.
5 Experiments
We evaluate our framework, COEVOLVE, on synthetic and real-world datasets. We use all the events up to time T · p as the training data, and the rest as testing data, where T is the length of the observation window. We tune hyper-parameters and the latent rank of other baselines using 10-fold cross validation with grid search. We vary the proportion p ∈ {0.7, 0.72, 0.74, 0.76, 0.78} and report
the averaged results over five runs on two tasks:
(a) Item recommendation: for each user u, at every testing time t, we compute the survival probability S^{u,i}(t) = exp(−∫_{t_n^{u,i}}^{t} λ^{u,i}(τ) dτ) of each item i up to time t, where t_n^{u,i} is the last training event time of (u, i). We then rank all the items in ascending order of S^{u,i}(t) to produce a recommendation list. Ideally, the item associated with the testing time t should rank first; hence a smaller value indicates better predictive performance. We repeat the evaluation at each testing moment and report the Mean Average Rank (MAR) of the respective testing items across all users.
(b) Time prediction: we predict the time when a testing event will occur between a given user-item pair (u, i) by calculating the density of the next event time as f(t) = λ^{u,i}(t) S^{u,i}(t). With the density, we compute the expected time of the next event by sampling future events as in [9]. We report the Mean Absolute Error (MAE) between the predicted and true time. Furthermore, we also report the relative percentage of the prediction error with respect to the entire testing time window.
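As a sketch of the time-prediction computation (the paper samples future events as in [9]; here, assuming the pair's intensity function is held fixed, we instead integrate t·f(t) numerically, truncating the improper integral at a finite horizon of our choosing):

```python
import numpy as np

def expected_next_time(t_n, lam, horizon=50.0, n_grid=2000):
    """E[t] = int_{t_n}^{inf} t f(t) dt, where f(t) = lam(t) S(t) and
    S(t) = exp(-int_{t_n}^{t} lam(tau) dtau); trapezoidal approximation."""
    ts = np.linspace(t_n, t_n + horizon, n_grid)
    lams = np.array([lam(t) for t in ts])
    cum = np.concatenate([[0.0],
                          np.cumsum(0.5 * (lams[1:] + lams[:-1]) * np.diff(ts))])
    f = lams * np.exp(-cum)
    return np.trapz(ts * f, ts)

print(expected_next_time(0.0, lambda t: 0.5))  # constant rate 0.5 -> about 2.0
```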
5.1 Competitors
TimeSVD++ is the classic matrix factorization method [18]. The latent factors of users and items are
designed as decay functions of time and also linked to each other based on time. FIP is a static low
rank latent factor model to uncover the compatibility between user and item features [29]. TSVD++
and FIP are only designed for data with explicit ratings. We convert the series of user-item interaction
events into an explicit rating using the frequency of a user's item consumptions [3]. STIC fits a semi-hidden Markov model to each observed user-item pair [16] and is only designed for time
prediction. PoissonTensor uses Poisson regression as the loss function [6] and has been shown to
outperform factorization methods based on squared loss [17, 28] on recommendation tasks. There are
two choices of reporting performance: i) use the parameters fitted only in the last time interval and
ii) use the average parameters over all intervals. We report the best performance between these two
choices. LowRankHawkes is a Hawkes process based model and it assumes user-item interactions
are independent [9].
5.2 Experiments on Synthetic Data
We simulate 1,000 users and 1,000 items. For each user, we further generate 10,000 events by Ogata's thinning algorithm [19]. We compute the MAE by comparing the estimated η, X with the ground truth.
The baseline drift feature is set to be constant. Figure 2 (a) shows that it only requires a few hundred
iterations to descend to a decent error, and (b) indicates that it only requires a modest number of
events to achieve a good estimation. Finally, (c) demonstrates that our method scales linearly as the
total number of training events grows.
Figure 2 (d-f) shows that COEVOLVE achieves the best predictive performance. Because POISSONTENSOR applies an extra time dimension and fits each time interval as a Poisson regression, it outperforms TIMESVD++ by capturing the fine-grained temporal dynamics. Finally, our method automatically adapts the contributions of each past item's factors to better capture the users' current latent features, hence it can achieve the best prediction performance overall.
[Figure 2 here]
Figure 2: Estimation error (a) vs. #iterations and (b) vs. #events per user; (c) scalability vs. #events per user; (d) average rank of the recommended items; (e) and (f) time prediction error.
5.3 Experiments on Real-World Data
Datasets. Our datasets are obtained from three different domains: TV streaming services (IPTV), a commercial review website (Yelp), and online media services (Reddit). IPTV contains 7,100 users' watching history of 436 TV programs over 11 months, with 2,392,010 events and 1,420 movie features, including 1,073 actors, 312 directors, 22 genres, 8 countries and 5 years. Yelp is available from the Yelp Dataset Challenge Round 7. It contains reviews for various businesses from October 2004 to December 2015. We filter to users with more than 100 posts, leaving 100 users and 17,213 businesses with around 35,093 reviews. Reddit contains the discussion events in January 2014. Furthermore, we randomly selected 1,000 users and collected the 1,403 groups that these users have discussions in, with a total of 10,000 discussion events. For item base features, IPTV has movie features, Yelp has business descriptions, and Reddit has none. In the experiments we fix the baseline features; there is no base feature for users. For interaction features, Reddit and Yelp have reviews in bag-of-words form, while IPTV has no such feature.
Figure 3 shows the predictive performance. For time prediction, COEVOLVE outperforms the baselines significantly, since we explicitly reason about and model the effect that past consumption behaviors change users' interests and items' features. In particular, compared with LOWRANKHAWKES, our model captures the interactions of each user-item pair with a multi-dimensional temporal point process. It is more expressive than the respective one-dimensional Hawkes process used by LOWRANKHAWKES, which ignores the mutual influence among items. Furthermore, since the unit of time is the hour, the improvement over the state-of-the-art on IPTV is around two weeks and on Reddit around two days. Hence our method significantly helps online services make better demand predictions.
For item recommendation, COEVOLVE also achieves competitive performance comparable with LOWRANKHAWKES on IPTV and Reddit. The reason behind this phenomenon is that one needs to compute the rank of the intensity function for the item prediction task, and the value of the intensity function for time prediction. LOWRANKHAWKES might be good at differentiating the rank of the intensity better than COEVOLVE; however, it may not be able to learn the actual value of the intensity accurately. Hence our method has an order of magnitude improvement in the time prediction task.
In addition to the superb predictive performance, COEVOLVE also learns the time-varying latent features of users and items. Figure 4 (a) shows that the user is initially interested in TV programs of adventures, but then the interest changes to Sitcom, Family and Comedy and finally switches to the Romance TV programs. Figure 4 (b) shows that Facebook and Apple are the two hot topics in the month of January 2014.
[Figure 3 here]
Figure 3: Prediction results on IPTV, Reddit and Yelp: (a) item recommendation, (b) time prediction (MAE), (c) time prediction (relative). Results are averaged over five runs with different portions of training data, and error bars represent the variance.
[Figure 4 here]
Figure 4: Learned time-varying features of (a) a user in IPTV and (b) the Technology group in Reddit.
The discussions about Apple suddenly increased on 01/21/2014, which can be traced to the news that Apple won a lawsuit against Samsung¹. It further demonstrates that our model can better explain and capture user behavior in the real world.
6 Conclusion
We have proposed an efficient framework for modeling the co-evolving nature of users' and items' latent features. Empirical evaluations on large synthetic and real-world datasets demonstrate its scalability and superior predictive performance. Future work includes extending it to other applications such as modeling the dynamics of social groups, and understanding people's behaviors on Q&A sites.
Acknowledgements. This project was supported in part by NSF/NIH BIGDATA 1R01GM108341, ONR
N00014-15-1-2340, NSF IIS-1218749, and NSF CAREER IIS-1350983.
¹ http://techcrunch.com/2014/01/22/apple-wins-big-against-samsung-in-court/
References
[1] O. Aalen, O. Borgan, and H. Gjessing. Survival and event history analysis: a process point of view.
Springer, 2008.
[2] D. Agarwal and B.-C. Chen. Regression-based latent factor models. In J. Elder, F. Fogelman-Soulié, P. Flach, and M. Zaki, editors, KDD, 2009.
[3] L. Baltrunas and X. Amatriain. Towards time-dependant recommendation based on implicit feedback,
2009.
[4] L. Charlin, R. Ranganath, J. McInerney, and D. M. Blei. Dynamic poisson factorization. In RecSys, 2015.
[5] Y. Chen, D. Pavlov, and J. Canny. Large-scale behavioral targeting. In J. Elder, F. Fogelman-Soulié, P. Flach, and M. J. Zaki, editors, KDD, 2009.
[6] E. C. Chi and T. G. Kolda. On tensors, sparsity, and nonnegative factorizations. SIAM Journal on Matrix
Analysis and Applications, 33(4):1272?1299, 2012.
[7] D. Cox and P. Lewis. Multivariate point processes. Selected Statistical Papers of Sir David Cox: Volume 1,
Design of Investigations, Statistical Methods and Applications, 1:159, 2006.
[8] J. K. Cullum and R. A. Willoughby. Lanczos Algorithms for Large Symmetric Eigenvalue Computations:
Vol. 1: Theory, volume 41. SIAM, 2002.
[9] N. Du, Y. Wang, N. He, and L. Song. Time sensitive recommendation from recurrent user activities. In
NIPS, 2015.
[10] M. D. Ekstrand, J. T. Riedl, and J. A. Konstan. Collaborative filtering recommender systems. Foundations
and Trends in Human-Computer Interaction, 4(2):81?173, 2011.
[11] M. Farajtabar, Y. Wang, M. Gomez-Rodriguez, S. Li, H. Zha, and L. Song. Coevolve: A joint point
process model for information diffusion and network co-evolution. In NIPS, 2015.
[12] P. Gopalan, J. M. Hofman, and D. M. Blei. Scalable recommendation with hierarchical poisson factorization. UAI, 2015.
[13] S. Gultekin and J. Paisley. A collaborative kalman filter for time-evolving dyadic processes. In ICDM,
pages 140?149, 2014.
[14] A. G. Hawkes. Spectra of some self-exciting and mutually exciting point processes. Biometrika, 58(1):83?
90, 1971.
[15] B. Hidasi and D. Tikk. General factorization framework for context-aware recommendations. Data Mining
and Knowledge Discovery, pages 1?30, 2015.
[16] K. Kapoor, K. Subbian, J. Srivastava, and P. Schrater. Just in time recommendations: Modeling the
dynamics of boredom in activity streams. In WSDM, 2015.
[17] A. Karatzoglou, X. Amatriain, L. Baltrunas, and N. Oliver. Multiverse recommendation: n-dimensional
tensor factorization for context-aware collaborative filtering. In Recsys, 2010.
[18] Y. Koren. Collaborative filtering with temporal dynamics. In KDD, 2009.
[19] Y. Ogata. On lewis? simulation method for point processes. IEEE Transactions on Information Theory,
27(1):23?31, 1981.
[20] P. Bhargava, T. Phan, J. Zhou, and J. Lee. Who, what, when, and where: Multi-dimensional collaborative recommendations using tensor factorization on sparse user-generated data. In WWW, 2015.
[21] R. Salakhutdinov and A. Mnih. Bayesian probabilistic matrix factorization using markov chain monte
carlo. In ICML, 2008.
[22] S. Sastry. Some np-complete problems in linear algebra. Honors Projects, 1990.
[23] X. Wang, R. Donaldson, C. Nell, P. Gorniak, M. Ester, and J. Bu. Recommending groups to users using
user-group engagement and time-dependent matrix factorization. In AAAI, 2016.
[24] Y. Wang, R. Chen, J. Ghosh, J. C. Denny, A. Kho, Y. Chen, B. A. Malin, and J. Sun. Rubik: Knowledge
guided tensor factorization and completion for health data analytics. In KDD, 2015.
[25] Y. Wang and A. Pal. Detecting emotions in social media: A constrained optimization approach. In IJCAI,
2015.
[26] Y. Wang, E. Theodorou, A. Verma, and L. Song. A stochastic differential equation framework for guiding
information diffusion. arXiv preprint arXiv:1603.09021, 2016.
[27] Y. Wang, B. Xie, N. Du, and L. Song. Isotonic hawkes processes. In ICML, 2016.
[28] L. Xiong, X. Chen, T.-K. Huang, J. G. Schneider, and J. G. Carbonell. Temporal collaborative filtering
with bayesian probabilistic tensor factorization. In SDM, 2010.
[29] S.-H. Yang, B. Long, A. Smola, N. Sadagopan, Z. Zheng, and H. Zha. Like like alike: joint friendship and
interest propagation in social networks. In WWW, 2011.
[30] X. Yi, L. Hong, E. Zhong, N. N. Liu, and S. Rajan. Beyond clicks: Dwell time for personalization. In
RecSys, 2014.
6,059 | 6,481 | Nested Mini-Batch K-Means
François Fleuret
Idiap Research Institute & EPFL
francois.fleuret@idiap.ch
James Newling
Idiap Research Institute & EPFL
james.newling@idiap.ch
Abstract
A new algorithm is proposed which accelerates the mini-batch k-means algorithm
of Sculley (2010) by using the distance bounding approach of Elkan (2003). We
argue that, when incorporating distance bounds into a mini-batch algorithm, already used data should preferentially be reused. To this end we propose using
nested mini-batches, whereby data in a mini-batch at iteration t is automatically
reused at iteration t + 1.
Using nested mini-batches presents two difficulties. The first is that unbalanced
use of data can bias estimates, which we resolve by ensuring that each data sample
contributes exactly once to centroids. The second is in choosing mini-batch sizes,
which we address by balancing premature fine-tuning of centroids with redundancy-induced slow-down. Experiments show that the resulting nmbatch algorithm is very effective, often arriving within 1% of the empirical minimum 100× earlier than the standard mini-batch algorithm.
1 Introduction
The k-means problem is to find k centroids to minimise the mean distance between samples and
their nearest centroids. Specifically, given N training samples X = {x(1), . . . , x(N )} in vector
space V, one must find C = {c(1), . . . , c(k)} in V to minimise energy E defined by,
    E(C) = (1/N) Σ_{i=1}^{N} ‖x(i) − c(a(i))‖²,    (1)

where a(i) = arg min_{j∈{1,...,k}} ‖x(i) − c(j)‖. In general the k-means problem is NP-hard, and so a trade-off must be made between low energy and low run time. The k-means problem arises in data
compression, classification, density estimation, and many other areas.
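For concreteness, the energy and assignments can be computed as follows (a NumPy sketch on synthetic data; a practical implementation would avoid materialising the full N × k distance matrix for large N):

```python
import numpy as np

def kmeans_energy(X, C):
    """E(C) = (1/N) sum_i ||x(i) - c(a(i))||^2, a(i) = argmin_j ||x(i) - c(j)||."""
    d2 = ((X[:, None, :] - C[None, :, :]) ** 2).sum(axis=2)  # N x k squared distances
    a = d2.argmin(axis=1)                                    # nearest-centroid assignments
    return d2[np.arange(len(X)), a].mean(), a

rng = np.random.default_rng(0)
X = rng.standard_normal((1000, 2))                 # N = 1000 samples in R^2
C = X[rng.choice(len(X), size=5, replace=False)]   # k = 5 centroids seeded from data
E, a = kmeans_energy(X, C)
print(E)
```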
A popular algorithm for k-means is Lloyd?s algorithm, henceforth lloyd. It relies on a two-step
iterative refinement technique. In the assignment step, each sample is assigned to the cluster whose
centroid is nearest. In the update step, cluster centroids are updated in accordance with assigned
samples. lloyd is also referred to as the exact algorithm, which can lead to confusion as it does
not solve the k-means problem exactly. Similarly, approximate k-means algorithms often refer to
algorithms which perform an approximation in either the assignment or the update step of lloyd.
1.1 Previous works on accelerating the exact algorithm
Several approaches for accelerating lloyd have been proposed, where the required computation is
reduced without changing the final clustering. Hamerly (2010) shows that approaches relying on
triangle inequality based distance bounds (Phillips, 2002; Elkan, 2003; Hamerly, 2010) always provide greater speed-ups than those based on spatial data structures (Pelleg and Moore, 1999; Kanungo
et al., 2002). Improving bounding based methods remains an active area of research (Drake, 2013;
Ding et al., 2015). We discuss the bounding based approach in ? 2.1.
1
1.2 Previous approximate k-means algorithms
The assignment step of lloyd requires more computation than the update step. The majority of
approximate algorithms thus focus on relaxing the assignment step, in one of two ways. The first is
to assign all data approximately, so that centroids are updated using all data, but some samples may
be incorrectly assigned. This is the approach used in Wang et al. (2012) with cluster closures. The
second approach is to exactly assign a fraction of data at each iteration. This is the approach used in
Agarwal et al. (2005), where a representative core-set is clustered, and in Bottou and Bengio (1995),
and Sculley (2010), where random samples are drawn at each iteration. Using only a fraction of data
is effective in reducing redundancy induced slow-downs.
The mini-batch k-means algorithm of Sculley (2010), henceforth mbatch, proceeds as follows. Centroids are initialised as a random selection of k samples. Then at every iteration, b of N samples are
selected uniformly at random and assigned to clusters. Cluster centroids are updated as the mean
of all samples ever assigned to them, and are therefore running averages of assignments. Samples
randomly selected more often have more influence on centroids as they reappear more frequently in
running averages, although the law of large numbers smooths out any discrepancies in the long run.
mbatch is presented in greater detail in ? 2.2.
1.3 Our contribution
The underlying goal of this work is to accelerate mbatch by using triangle inequality based distance
bounds. In so doing, we hope to merge the complementary strengths of two powerful and widely
used approaches for accelerating lloyd.
The effective incorporation of bounds into mbatch requires a new sampling approach. To see this,
first note that bounding can only accelerate the processing of samples which have already been
visited, as the first visit is used to establish bounds. Next, note that the expected proportion of visits
during the first epoch which are revisits is at most 1/e, as shown in SM-A. Thus the majority of
visits are first time visits and hence cannot be accelerated by bounds. However, for highly redundant
datasets, mbatch often obtains satisfactory clustering in a single epoch, and so bounds need to be
effective during the first epoch if they are to contribute more than a minor speed-up.
To better harness bounds, one must preferentially reuse already visited samples. To this end, we propose nested mini-batches. Specifically, letting M_t ⊆ {1, . . . , N} be the mini-batch indices used at iteration t ≥ 1, we enforce that M_t ⊆ M_{t+1}. One concern with nesting is that samples entering in early iterations have more influence than samples entering at late iterations, thereby introducing bias. To resolve this problem, we enforce that samples appear at most once in running averages. Specifically, when a sample is revisited, its old assignment is first removed before it is reassigned. The idea of nested mini-batches is discussed in § 3.1.
The second challenge introduced by using nested mini-batches is determining the size of M_t. On the one hand, if M_t grows too slowly, then one may suffer from premature fine-tuning. Specifically, when updating centroids using M_t ⊆ {1, . . . , N}, one is using the energy estimated on samples indexed by M_t as a proxy for the energy over all N training samples. If M_t is small and the energy estimate is poor, then minimising the energy estimate exactly is a waste of computation, since, as soon as the mini-batch is augmented, the proxy energy loss function will change. On the other hand, if M_t grows too rapidly, the problem of redundancy arises. Specifically, if centroid updates obtained with a small fraction of M_t are similar to the updates obtained with M_t, then it is a waste of computation to use M_t in its entirety. These ideas are pursued in § 3.2.
2 Related works
2.1 Exact acceleration using the triangle inequality
The standard approach to perform the assignment step of lloyd requires k distance calculations.
The idea introduced in Elkan (2003) is to eliminate certain of these k calculations by maintaining
bounds on distances between samples and centroids. Several novel bounding based algorithms have
since been proposed, the most recent being the yinyang algorithm of Ding et al. (2015). A thorough
comparison of bounding based algorithms was presented in Drake (2013). We illustrate the basic
idea of Elkan (2003) in Alg. 1, where for every sample i, one maintains k lower bounds, l(i, j) for j ∈ {1, . . . , k}, each bound satisfying l(i, j) ≤ ‖x(i) − c(j)‖. Before computing ‖x(i) − c(j)‖ on line 4 of Alg. 1, one checks that l(i, j) < d(i), where d(i) is the distance from sample i to the nearest currently found centroid. If l(i, j) ≥ d(i) then ‖x(i) − c(j)‖ ≥ d(i), and thus j can automatically be eliminated as a nearest centroid candidate.
Algorithm 1 assignment-with-bounds(i)
1: d(i) ← ‖x(i) − c(a(i))‖                ▷ where d(i) is distance to nearest centroid found so far
2: for all j ∈ {1, . . . , k} \ {a(i)} do
3:   if l(i, j) < d(i) then
4:     l(i, j) ← ‖x(i) − c(j)‖            ▷ make lower bound on distance between x(i) and c(j) tight
5:     if l(i, j) < d(i) then
6:       a(i) ← j
7:       d(i) ← l(i, j)
8:     end if
9:   end if
10: end for
The fully-fledged algorithm of Elkan (2003) uses additional tests beyond the one shown in Alg. 1, and includes upper bounds and inter-centroid distances. The most recently published bounding based algorithm, yinyang of Ding et al. (2015), is like that of Elkan (2003) but does not maintain bounds on all k distances to centroids; rather, it maintains lower bounds on groups of centroids simultaneously.
To maintain the validity of bounds, after each centroid update one performs l(i, j) ← l(i, j) − p(j), where p(j) is the distance moved by centroid j during the centroid update; the validity of this correction follows from the triangle inequality. Lower bounds are initialised as exact distances in the first iteration, and only in subsequent iterations can bounds help in eliminating distance calculations. Therefore, the algorithm of Elkan (2003) and its derivatives are all at least as slow as lloyd during the first iteration.
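A direct Python transcription of Alg. 1 together with the bound correction (a sketch for a single sample; l is that sample's row of k lower bounds and p the vector of distances moved by the centroids; the clamp at zero is valid since distances are nonnegative):

```python
import numpy as np

def assignment_with_bounds(x, C, a, l):
    """Alg. 1: skip computing ||x - c(j)|| whenever the lower bound l[j] >= d."""
    d = np.linalg.norm(x - C[a])
    for j in range(len(C)):
        if j != a and l[j] < d:
            l[j] = np.linalg.norm(x - C[j])   # tighten the bound (line 4)
            if l[j] < d:
                a, d = j, l[j]
    return a, d

def correct_bounds(l, p):
    """After centroid j moves by p[j], l(i, j) <- l(i, j) - p(j) stays valid."""
    return np.maximum(l - p, 0.0)
```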
2.2 Mini-batch k-means
The work of Sculley (2010) introduces mbatch, presented in Alg. 4, as a scalable alternative to lloyd. Reusing notation, we let the mini-batch size be b, and the total number of assignments ever made to cluster j be v(j). Let S(j) be the cumulative sum of data samples assigned to cluster j. The centroid update, line 9 of Alg. 4, is then c(j) ← S(j)/v(j). Sculley (2010) presents mbatch in the context of sparse datasets, and at the end of each round an l1-sparsification operation is performed to encourage sparsity. In this paper we are interested in mbatch in a more general context and do not consider sparsification.
Algorithm 2 initialise-c-S-v
for j ∈ {1, . . . , k} do
  c(j) ← x(i) for some i ∈ {1, . . . , N}
  S(j) ← x(i)
  v(j) ← 1
end for

Algorithm 3 accumulate(i)
S(a(i)) ← S(a(i)) + x(i)
v(a(i)) ← v(a(i)) + 1
3 Nested mini-batch k-means : nmbatch
The bottleneck of mbatch is the assignment step, on line 5 of Alg. 4, which requires k distance calculations per sample. The underlying motivation of this paper is to reduce the number of distance calculations at assignment by using distance bounds. However, as already discussed in § 1.3, simply wrapping line 5 in a bound test would not result in much gain, as only a minority of visited samples would benefit from bounds in the first epoch. For this reason, we will replace the random mini-batches at line 3 of Alg. 4 by nested mini-batches. This modification motivates a change to the running average centroid updates, discussed in § 3.1, and it introduces the need for a scheme to choose mini-batch sizes, discussed in § 3.2.
Algorithm 4 mbatch
1: initialise-c-S-v()
2: while convergence criterion not satisfied do
3:   M ← uniform random sample of size b from {1, . . . , N}
4:   for all i ∈ M do
5:     a(i) ← arg min_{j∈{1,...,k}} ‖x(i) − c(j)‖
6:     accumulate(i)
7:   end for
8:   for all j ∈ {1, . . . , k} do
9:     c(j) ← S(j)/v(j)
10:  end for
11: end while
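As a point of reference before the nested variant, one round of mbatch (Alg. 4) can be sketched as follows; the helper names are ours, and the l1-sparsification step of Sculley (2010) is omitted, as in the rest of the paper.

```python
import numpy as np

def mbatch_round(X, C, S, v, b, rng):
    """One round of Alg. 4: assign a uniform random mini-batch, then update
    centroids as running means c(j) = S(j) / v(j)."""
    M = rng.choice(X.shape[0], size=b, replace=False)         # line 3
    for i in M:
        j = int(np.argmin(np.linalg.norm(X[i] - C, axis=1)))  # line 5
        S[j] += X[i]                                          # accumulate(i)
        v[j] += 1
    nonempty = v > 0
    C[nonempty] = S[nonempty] / v[nonempty][:, None]          # line 9
```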
The resulting algorithm, which we refer to as nmbatch, is presented in Alg. 5.
There is no random sampling in nmbatch, although an initial random shuffling of samples can be performed to remove any ordering that may exist. Let bt be the size of the mini-batch at iteration t, that is bt = |Mt|. We simply take Mt to be the first bt indices, that is Mt = {1, . . . , bt}. Thus Mt ⊆ Mt+1 corresponds to bt ≤ bt+1. Let T be the number of iterations of nmbatch before terminating. We use as stopping criterion that no assignments change on the full training set, although this is not important and can be modified.
3.1 One sample, one vote : modifying cumulative sums to prevent duplicity
In mbatch, a sample used n times makes n contributions to one or more centroids, through line 6 of Alg. 4. Due to the extreme and systematic difference in the number of times samples are used with nested mini-batches, it is necessary to curtail any potential bias that such duplicated contributions may incur. To this end, we only allow a sample's most recent assignment to contribute to centroids. This is done by removing old assignments before samples are reused, shown on lines 15 and 16 of Alg. 5.
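In code, the "one sample, one vote" rule amounts to subtracting a sample's expired contribution before re-accumulating it under its fresh assignment; a minimal sketch with our own variable names:

```python
def refresh_contribution(i, x_i, j_new, d_new, a, d, S, v, sse):
    """Lines 13-19 of Alg. 5: each sample contributes exactly once, through
    its most recent assignment."""
    j_old = a[i]
    sse[j_old] -= d[i] ** 2    # remove expired sse contribution
    S[j_old] -= x_i            # remove expired sum contribution (line 15)
    v[j_old] -= 1              # remove expired count (line 16)
    a[i], d[i] = j_new, d_new  # adopt the fresh assignment and distance
    S[j_new] += x_i
    v[j_new] += 1
    sse[j_new] += d_new ** 2
```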
3.2 Finding the sweet spot : balancing premature fine-tuning with redundancy
We now discuss how to sensibly select the mini-batch size bt, where we recall that the sample indices of the mini-batch at iteration t are Mt = {1, . . . , bt}. The only constraint imposed so far is that bt ≤ bt+1 for t ∈ {1, . . . , T − 1}, that is, that bt does not decrease. We consider two extreme schemes to illustrate the importance of finding a scheme where bt grows neither too rapidly nor too slowly. The first extreme scheme is bt = N for t ∈ {1, . . . , T}. This is just a return to full batch k-means, and thus redundancy is a problem, particularly at early iterations. The second extreme scheme, where Mt grows very slowly, is the following: if any assignment changes at iteration t, then bt+1 = bt; otherwise bt+1 = bt + 1. The problem with this second scheme is that computation may be wasted in finding centroids which accurately minimise the energy estimated on unrepresentative subsets of the full training set. This is what we refer to as premature fine-tuning.
To develop a scheme which balances redundancy and premature fine-tuning, we need to find sensible
definitions for these terms. A first attempt might be to define them in terms of energy (1), as this is
ultimately what we wish to minimise. Redundancy would correspond to a slow decrease in energy
caused by long iteration times, and premature fine-tuning would correspond to approaching a local
minimum of a poor proxy for (1). A difficulty with an energy based approach is that we do not want
to compute (1) at each iteration and there is no clear way to quantify the underestimation of (1) using
a mini-batch. We instead consider definitions based on centroid statistics.
3.2.1 Balancing intra-cluster standard deviation with centroid displacement
Let ct(j) denote centroid j at iteration t, and let ct+1(j|b) be ct+1(j) when Mt+1 = {1, . . . , b}, so that ct+1(j|b) is the update to ct(j) using samples {x(1), . . . , x(b)}. Consider two options,
Algorithm 5 nmbatch
1: t = 1                                          ▷ Iteration number
2: M0 ← {}
3: M1 ← {1, . . . , b0}                           ▷ Indices of samples in current mini-batch
4: initialise-c-S-v()
5: for j ∈ {1, . . . , k} do
6:   sse(j) ← 0                                   ▷ Initialise sum of squares of samples in cluster j
7: end for
8: while stop condition is false do
9:   for i ∈ Mt−1 and j ∈ {1, . . . , k} do
10:    l(i, j) ← l(i, j) − p(j)                   ▷ Update bounds of reused samples
11:  end for
12:  for i ∈ Mt−1 do
13:    aold(i) ← a(i)
14:    sse(aold(i)) ← sse(aold(i)) − d(i)²        ▷ Remove expired sse, S and v contributions
15:    S(aold(i)) ← S(aold(i)) − x(i)
16:    v(aold(i)) ← v(aold(i)) − 1
17:    assignment-with-bounds(i)                  ▷ Reset assignment a(i)
18:    accumulate(i)
19:    sse(a(i)) ← sse(a(i)) + d(i)²
20:  end for
21:  for i ∈ Mt \ Mt−1 and j ∈ {1, . . . , k} do
22:    l(i, j) ← ‖x(i) − c(j)‖                    ▷ Tight initialisation for new samples
23:  end for
24:  for i ∈ Mt \ Mt−1 do
25:    a(i) ← arg min_{j∈{1,...,k}} l(i, j)
26:    d(i) ← l(i, a(i))
27:    accumulate(i)
28:    sse(a(i)) ← sse(a(i)) + d(i)²
29:  end for
30:  for j ∈ {1, . . . , k} do
31:    σ̂C(j) ← √(sse(j)) / √(v(j)(v(j) − 1))
32:    cold(j) ← c(j)
33:    c(j) ← S(j)/v(j)
34:    p(j) ← ‖c(j) − cold(j)‖
35:  end for
36:  if min_{j∈{1,...,k}} (σ̂C(j)/p(j)) > ρ then  ▷ Check doubling condition
37:    Mt+1 ← {1, . . . , min(2|Mt|, N)}
38:  else
39:    Mt+1 ← Mt
40:  end if
41:  t ← t + 1
42: end while
bt+1 = bt with resulting update ct+1(j|bt), and bt+1 = 2bt with update ct+1(j|2bt). If

  ‖ct+1(j|2bt) − ct+1(j|bt)‖ ≪ ‖ct(j) − ct+1(j|bt)‖,    (2)

then it makes little difference if centroid j is updated with bt+1 = bt or bt+1 = 2bt, as illustrated in Figure 1, left. Using bt+1 = 2bt would therefore be redundant. If on the other hand,

  ‖ct+1(j|2bt) − ct+1(j|bt)‖ ≫ ‖ct(j) − ct+1(j|bt)‖,    (3)

this suggests premature fine-tuning, as illustrated in Figure 1, right. Balancing redundancy and premature fine-tuning thus equates to balancing the terms on the left and right hand sides of (2) and (3). Let us denote by Mt(j) the indices of samples in Mt assigned to cluster j. In SM-B we show that the term on the left hand side of (2) and (3) can be estimated by (1/2) σ̂C(j), where

  σ̂C²(j) = (1/|Mt(j)|²) Σ_{i∈Mt(j)} ‖x(i) − ct(j)‖².    (4)
Figure 1: Centroid based definitions of redundancy and premature fine-tuning. Starting from centroid ct(j), the update can be performed with a mini-batch of size bt or 2bt. On the left, it makes little difference, and so using all 2bt points would be redundant. On the right, using 2bt samples results in a much larger change to the centroid, suggesting that ct(j) is near to a local minimum of the energy computed on bt points, corresponding to premature fine-tuning.
σ̂C(j) may underestimate ‖ct+1(j|2bt) − ct+1(j|bt)‖, as samples {x(bt + 1), . . . , x(2bt)} have not been used by centroids at iteration t; however, our goal here is to establish dimensional homogeneity. The right hand sides of (2) and (3) can be estimated by the distance moved by centroid j in the preceding iteration, which we denote by p(j). Balancing redundancy and premature fine-tuning thus equates to preventing σ̂C(j)/p(j) from getting too large or too small.
It may be that σ̂C(j)/p(j) differs significantly between clusters j. It is not possible to independently control the number of samples per cluster, and so a joint decision needs to be made by clusters as to whether or not to increase bt. We choose to make the decision based on the minimum ratio, on line 36 of Alg. 5, as premature fine-tuning is less costly when performed on a small mini-batch, and so it makes sense to allow slowly converging centroids to catch up with rapidly converging ones.

The decision to use a double-or-nothing scheme for growing the mini-batch is motivated by the fact that σ̂C(j) drops by a constant factor when the mini-batch doubles in size. A linearly increasing mini-batch would be prone to premature fine-tuning, as the mini-batch would not be able to grow rapidly enough.
Starting with an initial mini-batch size b0, nmbatch iterates until minj σ̂C(j)/p(j) is above some threshold ρ, at which point the mini-batch size increases as bt ← min(2bt, N), shown on line 37 of Alg. 5. The mini-batch size is guaranteed to eventually reach N, as p(j) eventually goes to zero. The doubling threshold ρ reflects the relative costs of premature fine-tuning and redundancy.
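The doubling rule itself is a one-liner once σ̂C and p are stored as arrays; a sketch (note that p(j) → 0 makes the ratio blow up, so doubling is eventually guaranteed, as stated above):

```python
import numpy as np

def next_batch_size(b_t, N, sigma_C, p, rho=100.0):
    """Line 36 of Alg. 5: double the mini-batch only when every cluster has
    centroid movement p(j) small relative to the noise estimate sigma_C(j)."""
    with np.errstate(divide="ignore"):
        ratios = sigma_C / p   # p(j) == 0 gives inf, which forces a doubling
    if np.min(ratios) > rho:
        return min(2 * b_t, N)
    return b_t
```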
3.3 A note on parallelisation
The parallelisation of nmbatch can be done in the same way as in mbatch, whereby a mini-batch is simply split into sub-mini-batches to be distributed. For mbatch, the only constraint on sub-mini-batches is that they are of equal size, to guarantee equal processing times. With nmbatch the constraint is slightly stricter, as the time required to process a sample depends on its time of entry into the mini-batch, due to bounds. Samples from all iterations should thus be balanced, the constraint becoming that each sub-mini-batch contains an equal number of samples from Mt \ Mt−1 for all t.
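A sketch of this balancing constraint: deal each "generation" Mt \ Mt−1 out to workers round-robin, so that every sub-mini-batch holds the same mix of entry times (our own illustration, not the distributed implementation):

```python
def balanced_split(generations, n_workers):
    """generations: one list of sample indices per iteration t (the sets
    M_t \\ M_{t-1}). Returns per-worker shards with an equal share of each."""
    shards = [[] for _ in range(n_workers)]
    for gen in generations:
        for w in range(n_workers):
            shards[w].extend(gen[w::n_workers])  # round-robin within a generation
    return shards
```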
4 Results
We have performed experiments on 3 dense datasets and the sparse dataset used in Sculley (2010). The INFMNIST dataset (Loosli et al., 2007) is an extension of MNIST, consisting of 28×28 hand-written digits (d = 784). We use 400,000 such digits for performing k-means and 40,000 for computing a validation energy EV. STL10P (Coates et al., 2011) consists of 6×6×3 image patches (d = 108); we train with 960,000 patches and use 40,000 for validation. KDDC98 contains 75,000 training samples and 20,000 validation samples, in 310 dimensions. Finally, the sparse RCV1 dataset of Lewis et al. (2004) consists of data in 47,237 dimensions, with two partitions containing 781,265 and 23,149 samples respectively. As done in Sculley (2010), we use the larger partition to learn clusters.
The experimental setup used on each of the datasets is the following: for 20 random seeds, the training dataset is shuffled and the first k datapoints are taken as initialising centroids. Then, for each of the algorithms, k-means is run on the shuffled training set. At regular intervals, a validation energy EV is computed on the validation set. The time taken to compute EV is not included in run times. The batch size for mbatch and the initial batch size for nmbatch are 5,000, and k = 50 clusters are used.
Figure 2: The mean energy on validation data (EV) relative to the lowest energy (E*) across 20 runs, with standard deviations, on KDDC98, INFMNIST, RCV1 and STL10P (energy plotted against time in seconds). Baselines are lloyd, yinyang, and mbatch, shown with the new algorithm nmbatch with ρ = 100. We see that nmbatch is consistently faster than all baselines, and obtains final minima very similar to those obtained by the exact algorithms. On the sparse dataset RCV1, the speed-up is noticeable within 0.5% of the empirical minimum E*. On the three dense datasets, the speed-up over mbatch is between 10× and 100× at 2% of E*, with even greater speed-ups below 2%, where nmbatch converges very quickly to local minima.
Figure 3: Relative errors on validation data at t ∈ {2, 10} seconds, for nmbatch with active and deactivated bound tests, for ρ ∈ {10⁻¹, 10⁰, 10¹, 10², 10³}, on KDDC98, INFMNIST, RCV1 and STL10P. In the standard case of active bound testing, large values of ρ work well, as premature fine-tuning is less of a concern. However, with the bound test deactivated, premature fine-tuning becomes costly for large ρ, and an optimal ρ value is one which trades off redundancy (ρ too small) against premature fine-tuning (ρ too large).
The mean and standard deviation of EV over the 20 runs are computed, and this is what is plotted in Figure 2, relative to the lowest obtained validation energy over all runs on a dataset, E*. Before comparing algorithms, we note that our implementation of the baseline mbatch is competitive with existing implementations, as shown in Appendix A.
In Figure 2, we plot time-energy curves for nmbatch with three baselines. We use ρ = 100, as described in the following paragraph. On the 3 dense datasets, we see that nmbatch is much faster than mbatch, obtaining a solution within 2% of E* between 10× and 100× earlier than mbatch. On the sparse dataset RCV1, the speed-up becomes noticeable within 0.5% of E*. Note that in a single epoch nmbatch gets very near to E*, whereas the full batch algorithms lloyd and yinyang only complete one iteration. The mean final energies of nmbatch and the exact algorithms are consistently within one initialisation standard deviation. This means that the random initialisation seed has a larger impact on final energy than the choice between nmbatch and an exact algorithm.
We now discuss the choice of ρ. Recall that the mini-batch size doubles when minj σ̂C(j)/p(j) > ρ. Thus a large ρ means that smaller values of p(j) are needed to invoke a doubling, which means less robustness against premature fine-tuning. The relative costs of premature fine-tuning and redundancy are influenced by the use of bounds. Consider the case of premature fine-tuning with bounds: p(j) becomes small, and thus bound tests become more effective, as the bounds decrease more slowly (line 10 of Alg. 5). Thus, while premature fine-tuning does result in more samples being visited than necessary, each visit is processed rapidly, and so is less costly. We have found that taking ρ to be large works well for nmbatch. Indeed, there is little difference in performance for ρ ∈ {10, 100, 1000}. To test that our formulation is sensible, we performed tests with the bound test (line 3 of Alg. 1) deactivated. When deactivated, ρ = 10 is in general better than larger values of ρ, as seen in Figure 3. Full time-energy curves for different ρ values are provided in SM-C.
5 Conclusion and future work
We have shown how triangle inequality based bounding can be used to accelerate mini-batch k-means. The key is the use of nested batches, which enables rapid processing of already used samples. The idea of replacing uniformly sampled mini-batches with nested mini-batches is quite general, and applicable to other mini-batch algorithms. In particular, we believe that the sparse dictionary learning algorithm of Mairal et al. (2009) could benefit from nesting. One could also consider adapting nested mini-batches to stochastic gradient descent, although this is more speculative.

Celebi et al. (2013) show that specialised initialisation schemes such as k-means++ can result in better clusterings. While this is not the case for the datasets we have used, it would be interesting to consider adapting such initialisation schemes to the mini-batch context.

Our nested mini-batch algorithm nmbatch uses a very simple bounding scheme. We believe that further improvements could be obtained through more advanced bounding, and that the memory footprint of O(kN) could be reduced by using a scheme where, as the mini-batch grows, the number of bounds maintained decreases, so that bounds on groups of clusters merge.
A Comparing Baseline Implementations
We compare our implementation of mbatch with two publicly available implementations: that accompanying Sculley (2010), in C++, and that in scikit-learn (Pedregosa et al., 2011), written in Cython. Comparisons are presented in Table 1, where our implementations are seen to be competitive. Experiments were all single threaded. Our C++ and Python code is available at https://github.com/idiap/eakmeans.
            INFMNIST (dense)        RCV1 (sparse)
            ours     sklearn        ours    sklearn    sofia
time [s]    12.4     20.6           15.2    63.6       23.3

Table 1: Comparing implementations of mbatch on INFMNIST (left) and RCV1 (right). Time in seconds to process N datapoints, where N = 400,000 for INFMNIST and N = 781,265 for RCV1. Implementations are our own (ours), that in scikit-learn (sklearn), and that of Sculley (2010) (sofia).
Acknowledgments
James Newling was funded by the Hasler Foundation under the grant 13018 MASH2.
References
Agarwal, P. K., Har-Peled, S., and Varadarajan, K. R. (2005). Geometric approximation via coresets. In Combinatorial and Computational Geometry, MSRI, pages 1–30. University Press.
Bottou, L. and Bengio, Y. (1995). Convergence properties of the K-means algorithm. In Advances in Neural Information Processing Systems, pages 585–592.
Celebi, M. E., Kingravi, H. A., and Vela, P. A. (2013). A comparative study of efficient initialization methods for the k-means clustering algorithm. Expert Syst. Appl., 40(1):200–210.
Coates, A., Lee, H., and Ng, A. (2011). An analysis of single-layer networks in unsupervised feature learning. In Gordon, G., Dunson, D., and Dudík, M., editors, Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, volume 15 of JMLR Workshop and Conference Proceedings, pages 215–223. JMLR W&CP.
Ding, Y., Zhao, Y., Shen, X., Musuvathi, M., and Mytkowicz, T. (2015). Yinyang k-means: A drop-in replacement of the classic k-means with consistent speedup. In Proceedings of the 32nd International Conference on Machine Learning, ICML 2015, Lille, France, 6–11 July 2015, pages 579–587.
Drake, J. (2013). Faster k-means clustering. Accessed online 19 August 2015.
Elkan, C. (2003). Using the triangle inequality to accelerate k-means. In Machine Learning, Proceedings of the Twentieth International Conference (ICML 2003), August 21–24, 2003, Washington, DC, USA, pages 147–153.
Hamerly, G. (2010). Making k-means even faster. In SDM, pages 130–140.
Kanungo, T., Mount, D., Netanyahu, N., Piatko, C., Silverman, R., and Wu, A. (2002). An efficient k-means clustering algorithm: analysis and implementation. Pattern Analysis and Machine Intelligence, IEEE Transactions on, 24(7):881–892.
Lewis, D. D., Yang, Y., Rose, T. G., and Li, F. (2004). RCV1: A new benchmark collection for text categorization research. Journal of Machine Learning Research, 5:361–397.
Loosli, G., Canu, S., and Bottou, L. (2007). Training invariant support vector machines using selective sampling. In Bottou, L., Chapelle, O., DeCoste, D., and Weston, J., editors, Large Scale Kernel Machines, pages 301–320. MIT Press, Cambridge, MA.
Mairal, J., Bach, F., Ponce, J., and Sapiro, G. (2009). Online dictionary learning for sparse coding. In Proceedings of the 26th Annual International Conference on Machine Learning, ICML '09, pages 689–696, New York, NY, USA. ACM.
Pedregosa, F., Varoquaux, G., Gramfort, A., Michel, V., Thirion, B., Grisel, O., Blondel, M., Prettenhofer, P., Weiss, R., Dubourg, V., Vanderplas, J., Passos, A., Cournapeau, D., Brucher, M., Perrot, M., and Duchesnay, E. (2011). Scikit-learn: Machine learning in Python. Journal of Machine Learning Research, 12:2825–2830.
Pelleg, D. and Moore, A. (1999). Accelerating exact k-means algorithms with geometric reasoning. In Proceedings of the Fifth ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, KDD '99, pages 277–281, New York, NY, USA. ACM.
Phillips, S. (2002). Acceleration of k-means and related clustering algorithms. Volume 2409 of Lecture Notes in Computer Science. Springer.
Sculley, D. (2010). Web-scale k-means clustering. In Proceedings of the 19th International Conference on World Wide Web, WWW '10, pages 1177–1178, New York, NY, USA. ACM.
Wang, J., Wang, J., Ke, Q., Zeng, G., and Li, S. (2012). Fast approximate k-means via cluster closures. In CVPR, pages 3037–3044. IEEE Computer Society.
6,060 | 6,482 | Blind Attacks on Machine Learners
Alex Beatson
Department of Computer Science
Princeton University
abeatson@princeton.edu
Zhaoran Wang
Department of Operations Research
and Financial Engineering
Princeton University
zhaoran@princeton.edu
Han Liu
Department of Operations Research
and Financial Engineering
Princeton University
hanliu@princeton.edu
Abstract
The importance of studying the robustness of learners to malicious data is well established. While much work has been done establishing both robust estimators and effective data injection attacks when the attacker is omniscient, the ability of an attacker to provably harm learning while having access to little information is largely unstudied. We study the potential of a "blind attacker" to provably limit a learner's performance by a data injection attack, without observing the learner's training set or any parameter of the distribution from which it is drawn. We provide examples of simple yet effective attacks in two settings: firstly, where an "informed learner" knows the strategy chosen by the attacker, and secondly, where a "blind learner" knows only the proportion of malicious data and some family to which the malicious distribution chosen by the attacker belongs. For each attack, we analyze minimax rates of convergence and establish lower bounds on the learner's minimax risk, exhibiting limits on a learner's ability to learn under data injection attack even when the attacker is "blind".
1 Introduction
As machine learning becomes more widely adopted in security and in security-sensitive tasks, it is important to consider what happens when some aspect of the learning process or the training data is compromised [1–4]. Examples in network security are common and include tasks such as spam filtering [5, 6] and network intrusion detection [7, 8]; examples outside the realm of network security include statistical fraud detection [9] and link prediction using social network data or communications metadata for crime science and counterterrorism [10].

In a training set attack, an attacker either adds adversarial data points to the training set ("data injection") or perturbs some of the points in the dataset so as to influence the concept learned by the learner, often with the aim of maximizing the learner's risk. Training-set data injection attacks are one of the most practical means by which an attacker can influence learning, as in many settings an attacker which does not have insider access to the learner or its data collection or storage systems may still be able to carry out some activity which is monitored and whose resulting data is used in the learner's training set [2, 6]. In a network security setting, an attacker might inject data into the training set for an anomaly detection system so that malicious traffic is classified as normal, thus making a network vulnerable to attack, or so that normal traffic is classified as malicious, thus harming network operation.
A growing body of research focuses on game-theoretic approaches to the security of machine learning, analyzing both the ability of attackers to harm learning and effective strategies for learners to defend against attacks. This work often makes strong assumptions about the knowledge of the attacker. In a single-round game it is usually assumed that the attacker knows the algorithm used by the learner (e.g. SVM or PCA) and has knowledge of the training set, either by observing the training data or the data-generating distribution [2, 5, 11]. This allows the construction of an optimal attack to be treated as an optimization problem. However, this assumption is often unrealistic, as it requires insider knowledge of the learner, or requires the attacker to solve the same estimation problem the learner faces in order to identify the data-generating distribution. In an iterated-game setting it is usually assumed that the attacker can query the learner and is thus able to estimate the learner's current hypothesis in each round [12–14]. This assumption is reasonable in some settings, but in other scenarios the attacker may not receive immediate feedback from the learner, making the iterated-game setting inappropriate. We provide analysis which makes weaker assumptions than either of these bodies of work by taking a probabilistic approach in tackling the setting where a "blind attacker" has no knowledge of the training set, the learner's algorithm, or the learner's hypothesis.
Another motivation is provided by the field of privacy. Much work in the field of statistical privacy concerns disclosure risk: the probability that an entry in a dataset might be identified given statistics of the dataset released. This has been formalized by "differential privacy", which provides bounds on the maximum disclosure risk [15]. However, differential privacy hinges on the benevolence of an organization to which you give your data: the privacy of individuals is preserved as long as organizations which collect and analyze data take necessary steps to enforce differential privacy. Many data are gathered without users' deliberate consent or even knowledge. Organizations are also not yet under legal obligation to use differentially-private procedures.

A user might wish to take action to preserve their own privacy without making any assumption of benevolence on the part of those that collect data arising from the user's actions. For example, they may wish to prevent an online service from accurately estimating their income, ethnicity, or medical history. The user may have to submit some quantity of genuine data in order to gain a result from the service which addresses a specific query, and may not even observe all the data the service collects. They may wish to enforce the privacy of their information by also submitting fabricated data to the service or carrying out uncharacteristic activity. This is a data injection training set attack, and studying such attacks thus reveals the ability of a user to prevent a statistician or learner from making inferences from the user's behavior.
In this paper we address the problem of a one-shot data injection attack carried out by a blind attacker who does not observe the training set, the true distribution of interest, or the learner's algorithm. We approach this problem from the perspective of minimax decision theory to provide an analysis of the rate of convergence of estimators on training sets subject to such attacks. We consider both an "informed learner" setting, where the learner is aware of the exact distribution used by the attacker to inject malicious data, and a "blind learner" setting, where the learner is unaware of the malicious distribution. In both settings we suggest attacks which aim to minimize an upper bound on the pairwise KL divergences between the distributions conditioned on particular hypotheses, and thus maximize a lower bound on the minimax risk of the learner. We provide lower bounds on the rate of convergence of any estimator under these attacks.
2 Setting and contributions

2.1 Setting
A learner attempts to learn some parameter θ of a distribution of interest Fθ, with density fθ, belonging to some family F = {Fθ : θ ∈ Θ}, where Θ is a set of candidate hypotheses for the parameter. "Uncorrupted" data X1, ..., Xn ∈ X are drawn i.i.d. from Fθ. The attacker chooses some malicious distribution Gφ, with density gφ, from a family G = {Gφ : φ ∈ Φ}, where Φ is a parameter set representing candidate attack strategies. "Malicious" data X′1, ..., X′n ∈ X are drawn i.i.d. from the malicious distribution. The observed dataset is made up of a fraction α of true examples and 1 − α of malicious examples. The learner observes a dataset Z1, ..., Zn ∈ Z, where

  Zi = Xi with probability α, and Zi = X′i with probability 1 − α.    (1)

We denote the distribution of Z by P. P is clearly a mixture distribution, with density

  p(z) = α fθ(z) + (1 − α) gφ(z).

The distribution of Z conditional on X is

  p(z|x) = α 1{z = x} + (1 − α) gφ(z).
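Concretely, the observation model (1) is straightforward to simulate; in the sketch below, sample_f, sample_g and alpha are placeholders for whatever the two parties choose, not quantities fixed by the paper.

```python
import numpy as np

def observe(n, alpha, sample_f, sample_g, rng):
    """Draw Z_1, ..., Z_n from P = alpha * F_theta + (1 - alpha) * G_phi, as in (1)."""
    genuine = rng.random(n) < alpha  # Bernoulli(alpha) mask: true vs. malicious
    X = sample_f(n, rng)             # uncorrupted draws from F_theta
    Xm = sample_g(n, rng)            # malicious draws from G_phi
    return np.where(genuine, X, Xm)

# Example: a bounded F_theta under a uniform attack on Z = [-1, 1].
rng = np.random.default_rng(0)
Z = observe(1000, 0.8,
            lambda n, r: r.uniform(-0.2, 0.8, n),  # mean 0.3, support within Z
            lambda n, r: r.uniform(-1, 1, n), rng)
```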
We consider two distinct settings based on the knowledge of the attacker and of the learner. First, we consider the scenario where the learner knows the malicious distribution Gφ and the fraction of inserted examples ("informed learner"). Second, we consider the scenario where the learner knows only the family G to which Gφ belongs and the fraction of inserted examples ("blind learner"). Our work assumes that the attacker knows only the family of distributions F to which the true distribution belongs ("blind attacker"). As such, the attacker designs an attack so as to maximally lower bound the learner's minimax risk. We leave as future work a probabilistic treatment of the setting where the attacker knows the true Fθ but not the training set drawn from it ("informed attacker"). To our knowledge, our work is the first to consider the problem of learning in a setting where the training data is distributed according to a mixture of a distribution of interest and a malicious distribution chosen by an adversary without knowledge of the distribution of interest.
2.2 Related work
Our paper has very strong connections to several problems which have previously been studied in the minimax framework.

First is the extensive literature on robust statistics. Our framework is very similar to Huber's ε-contamination model, where the observed data follow the distribution

  (1 − ε) Pθ + ε Q.

Here ε controls the degree of corruption, Q is an arbitrary corruption distribution, and the learner attempts to estimate θ robustly to the contamination. A general estimator which achieves the minimax optimal rate under Huber's ε-contamination model was recently proposed by Chen, Gao and Ren [16]. Our work differs from the robust estimation literature in that, rather than designing optimal estimators for the learner, we provide concrete examples of attack strategies which harm the learning rate of any estimator, even those which are optimal under Huber's model. Unlike robust statistics, our attacker does not have complete information on the generating distribution, and must select an attack which is effective for any data-generating distribution drawn from some set. Our work has similar connections to the literature on minimax rates of convergence of estimators for mixture models [17] and minimax rates for mixed regression with multiple components [18], but differs in that we consider the problem of designing a corrupting distribution.
There are also connections to the work on PAC learning with contaminated data [19]. Here the key
difference, beyond the fact that we focus on strategies for a blind attacker as discussed earlier, is that
we use information-theoretic proof techniques rather than reductions to computational hardness. This
means that our bounds restrict all learning algorithms, not just polynomial-time learning algorithms.
Our work has strong connections to the analysis of minimax lower bounds in local differential privacy. In [20] and [21], Duchi, Wainwright and Jordan establish lower bounds in the local differential privacy setting, where P(Zi | Xi = x), the likelihood of an observed data point Zi given that Xi takes any value x, is no more than some constant factor greater than P(Zi | Xi = x′), the likelihood of Zi given that Xi takes any other value x′. Our work can be seen as an adaptation of those ideas to a new setting: we perform very similar analysis, but in a data injection attack setting rather than a local differential privacy setting. Our analysis for the blind attacker, informed learner setting and our examples in Section 5 for both settings draw heavily from [21].

In fact, the blind attack setting is by nature locally differentially private, with the likelihood ratio upper bounded by

  max_z [ (α fθ(z) + (1 − α) gφ(z)) / ((1 − α) gφ(z)) ],

as in the blind attack setting only a fraction α of the data points are drawn from the distribution of interest F. This immediately suggests bounds on the minimax rates of convergence according to [20]. However, the rates we obtain by appropriate choice of Gφ by the attacker yield lower bounds on the rate of convergence which are often much slower than the bounds due to differential privacy obtained by arbitrary choice of Gφ.
The rest of this work proceeds as follows. Section 3.1 formalizes our notation. Section 3.2 introduces our minimax framework and the standard techniques of lower bounding the minimax risk by reduction from parameter estimation to testing. Sections 3.3 and 3.4 discuss the "blind attacker; informed learner" and "blind attacker; blind learner" settings in this minimax framework. Section 3.5 briefly proposes how this framework could be extended to consider an "informed attacker" which observes the true distribution of interest Fθ. Section 4 provides a summary of the main results. In Section 5 we give examples of estimating a mean under blind attack, in both the informed and blind learner settings, and of performing linear regression in the informed learner setting. In Section 6 we conclude. Proof of the main results is presented in the appendix.
3 Problem formulation

3.1 Notation
We denote the "uncorrupted" data by the random variables X1:n. Fi is the distribution and fi the density of each Xi conditioning on θ = θi ∈ Θ; Fθ and fθ are the generic distribution and density parametrized by θ. We denote malicious data by the random variables X′1:n. In the "informed learner" setting, G is the distribution and g the density from which each X′i is drawn. In the "blind learner" setting, Gj and gj are the distribution and density of X′i conditioning on φ = φj ∈ Φ; Gφ and gφ are the generic distribution and density parametrized by φ. We denote the observed data Z1:n, which are distributed according to (1). Pi is the distribution and pi the density of each Zi, conditioning on θ = θi and φ = φi. Pθ or Pθ,φ is the parametrized form. We say that Pi = αFi + (1 − α)Gi, or equivalently pi(z) = αfi(z) + (1 − α)gi(z), to indicate that Pi is a weighted mixture of the distributions Fi and Gi. We assume that X, X′ and Z have the same support, denoted Z. Mn is the minimax risk of a learner. DKL(P1 ‖ P2) is the KL divergence. ‖P1 − P2‖TV is the total variation distance. I(Z, V) is the mutual information between the random variables Z and V. θ̂n : Zⁿ → Θ denotes an arbitrary estimator for θ with a sample size of n; ψ̂n : Zⁿ → Ψ denotes an arbitrary estimator for an arbitrary parameter vector ψ with a sample size of n.
3.2 Minimax framework
The minimax risk of estimating a parameter ψ ∈ Ψ is equal to the risk of the estimator ψ̂n which achieves the smallest maximal risk across all ψ ∈ Ψ:

  Mn = inf_{ψ̂} sup_{ψ∈Ψ} E_{Z1:n ∼ Pψⁿ} L(ψ, ψ̂n).

The minimax risk thus provides a strong guarantee: the population risk of an estimator can be no worse than the minimax risk, no matter which ψ ∈ Ψ happens to be the true parameter. Our analysis aims to build insight into how the minimax risk increases when the training set is subjected to blind data injection attacks. In the informed learner setting, we fix some α and Gφ and consider Ψ = Θ, letting L(ψ, ψ̂n) be the squared ℓ2 distance ‖ψ − ψ̂n‖²₂. In the blind learner setting, we account for there being two parameters unknown to the learner, θ and φ, by letting Ψ = Θ × Φ and considering a loss function which depends only on the value of θ and its estimator, L(ψ, ψ̂n) = ‖θ − θ̂n‖²₂.

We follow the standard approach to lower bounding the minimax risk [22], reducing the problem of estimating ψ to that of testing the hypothesis H : V = Vj for Vj ∈ V, where V ∼ U(V), a uniform distribution across V. Here V ⊂ Ψ is an appropriate finite packing of the parameter space.
The Le Cam method provides a lower bound on the minimax risk of the learner in terms of the KL divergence DKL(Pθ1 ‖ Pθ2) for θ1, θ2 ∈ Θ [22]:

  Mn ≥ L(θ1, θ2) [ 1/2 − (1/2) √( (n/2) DKL(Pθ1 ‖ Pθ2) ) ].    (2)
The Fano method provides lower bounds on the minimax risk of the learner in terms of the mutual information I(Z, V) between the observed data and V chosen uniformly at random from V, where L(Vi, Vj) ≥ 2δ ∀ Vi, Vj ∈ V [22]:

  Mn ≥ δ [ 1 − (I(Z1:n; V) + log 2) / log |V| ].    (3)
The mutual information is upper bounded by the pairwise KL divergences as

  I(Z1:n; V) ≤ (n / |V|²) Σ_i Σ_j DKL(PVi ‖ PVj).    (4)
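For intuition, (3) and (4) combine into a bound that is simple to evaluate once the pairwise KL divergences are known; the following sketch (with hypothetical inputs) makes the dependence explicit.

```python
import numpy as np

def fano_lower_bound(delta, n, pairwise_kl):
    """Evaluate (3) with I(Z_{1:n}; V) replaced by its upper bound (4).
    pairwise_kl: |V| x |V| matrix with entries D_KL(P_{V_i} || P_{V_j})."""
    V = pairwise_kl.shape[0]
    mutual_info_bound = n * pairwise_kl.sum() / V**2  # bound (4)
    return delta * (1.0 - (mutual_info_bound + np.log(2)) / np.log(V))
```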
3.3 Blind attacker, informed learner
In this setting, we assume the attacker does not know Fθ but does know F. The learner knows both Gφ and α prior to picking an estimator. In this case, as Gφ is known, we do not need to consider a distribution over possible values of φ; instead, we consider some fixed p(z|x). The attacker chooses Gφ to attempt to maximally lower bound the minimax risk of the learner:

  φ* = argmax_φ Mn = argmax_φ inf_{θ̂} sup_{θ∈Θ} E_{Z1:n ∼ Pθ,φ} L(θ, θ̂n),

where L(θ, θ′) is the learner's loss function; in our case the squared ℓ2 distance ‖θ − θ′‖²₂.

The attacker chooses a malicious distribution Gφ* which minimizes the sum of KL divergences between the distributions indexed by V:

  φ* = argmin_φ Σ_{θi∈V} Σ_{θj∈V} DKL(Pθi,φ ‖ Pθj,φ) ≥ (|V|²/n) I(Zn; θ),

where Pθi,φ = α Fθi + (1 − α) Gφ. This directly provides lower bounds on the minimax risk of the learner via (2) and (3).
3.4 Blind attacker, blind learner
In this setting, the learner does not know the specific malicious distribution Gφ used to inject points into the training set, but is allowed to know the family G = {Gφ : φ ∈ Φ} from which the attacker picks this distribution. We propose that the minimax risk is thus with respect to the worst-case choice of both the true parameter of interest θ and the parameter of the malicious distribution φ:

  Mn = inf_{θ̂} sup_{(θ,φ)∈Θ×Φ} E_{Z1:n ∼ Pθ,φ} L(θ, θ̂n).

That is, the minimax risk in this setting is taken over the worst-case choice of the parameter pair (θ, φ) ∈ Θ × Φ, but the loss L(θ, θ̂) is with respect to only the true value of θ and its estimator θ̂. The attacker thus designs a family of malicious distributions G = {Gφ : φ ∈ Φ} so as to maximally lower bound the minimax risk:

  G* = argmax_G inf_{θ̂} sup_{(Fθ,Gφ)∈F×G} E_{Z1:n} L(θ, θ̂).
We use the Le Cam approach (2) in this setting. To accommodate the additional set of parameters Φ, we consider nature picking (θ, φ) from Θ × Φ. The loss function is L((θi, φi), (θj, φj)) = ‖θi − θj‖²₂, and thus only depends on θ. Therefore, when constructing our hypothesis set, we must choose well-separated θ but may arbitrarily pick each element φ. The problem reduces from that of estimating θ to that of testing the hypothesis H : (θ, φ) = (θ, φ)j for (θ, φ)j ∈ V, where nature chooses (θ, φ) ∼ U(V).

The attacker again lower bounds the minimax risk by choosing G to minimize an upper bound on the pairwise KL divergences. Unlike the informed learner setting, where the KL divergence was between the distributions indexed by θi and θj with φ fixed, here the KL divergence is between the distributions indexed by appropriate choices of pairings (θi, φi) and (θj, φj):

  G* = argmin_G Σ_{(θi,φi)∈V} Σ_{(θj,φj)∈V} DKL(Pθi,φi ‖ Pθj,φj) ≥ (|V|²/n) I(Zn; θ),

where Pθi,φi = α Fθi + (1 − α) Gφi.
3.5 Informed attacker
We leave this setting as future work, but briefly propose a formulation for completeness. In this setting, the attacker knows Fθ prior to picking Gφ. We assume that the learner picks some θ̂ which is minimax-optimal over F and G as defined in Sections 3.3 and 3.4 respectively. We denote the appropriate set of such estimators by Θ̂. The attacker picks Gφ ∈ G so as to maximally lower bound the risk for any θ̂ ∈ Θ̂:

  Rθ,φ(θ̂) = E_{Z1:n ∼ Pθ,φ} L(θ, θ̂n).

This is similar to the setting in [11], with the modification that the learner can use any (potentially non-convex) algorithm and estimator. The attacker must therefore identify an optimal attack using information-theoretic techniques and knowledge of Fθ, rather than inverting the learner's convex learning problem and using convex optimization to maximize the learner's risk.
4 Main results

4.1 Informed learner, blind attacker
In the informed learner setting, the attacker chooses a single malicious distribution (known to the
learner) from which to draw malicious data.
Theorem 1 (Uniform attack). The attacker picks gφ(z) := g uniform over Z in the informed learner setting. We assume that Z is compact and that Fi, Fj ≪ G for all θi, θj ∈ Θ. Then:

  DKL(Pi ‖ Pj) + DKL(Pj ‖ Pi) ≤ (α² / (1 − α)) ‖Fi − Fj‖²TV Vol(Z)   ∀ θi, θj ∈ Θ.
The proof modifies the analysis used to prove Theorem 1 in [21] and is presented in the appendix. By applying Le Cam's method to P1 and P2 as described in the theorem, we find:
Corollary 1.1 (Le Cam bound with uniform attack). Given a data injection attack as described in Theorem 1, the minimax risk of the learner is lower bounded by

  Mn ≥ L(θ1, θ2) [ 1/2 − (1/2) √( (n/2) · (α² / (1 − α)) ‖F1 − F2‖²TV Vol(Z) ) ].
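To see the α²/(1 − α) factor at work, one can simulate a uniform attack on a simple mean-estimation task; the 1/α de-biasing below is our own illustration of how an informed learner can correct the naive mean, not a claim of optimality.

```python
import numpy as np

rng = np.random.default_rng(0)
n, alpha, theta = 10_000, 0.6, 0.3
mask = rng.random(n) < alpha
Z = np.where(mask,
             rng.uniform(theta - 0.5, theta + 0.5, n),  # F_theta with mean theta
             rng.uniform(-1, 1, n))                     # uniform attack G on Z
# Under the uniform attack, E[Z] = alpha * theta + (1 - alpha) * 0, so dividing
# the sample mean by alpha de-biases it, at the price of inflated variance:
theta_hat = Z.mean() / alpha
print(theta, theta_hat)
```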
We turn to the local Fano method. Consider the traditional setting (Pθ = Fθ), and consider a packing set V of Θ which obeys L(θi, θj) ≥ 2δ ∀ θi, θj ∈ V, and where the KL divergences are bounded such that there exists some fixed κ fulfilling DKL(Fi ‖ Fj) ≤ κδ ∀ θi, θj ∈ V. We can use this inequality and the bound on mutual information in (4) to rewrite the Fano bound in (3) as:

  Mn ≥ δ [ 1 − (nκδ + log 2) / log |V| ].
If we consider the uniform attack setting with the same packing set V of Θ, then by applying Theorem 1 in addition to the bound on mutual information in (4) to the standard Fano bound in (3), we obtain:

Corollary 1.2 (Local Fano bound with uniform attack). Given a data injection attack as described in Theorem 1, and given any packing V of Θ such that L(θi, θj) ≥ 2δ ∀ θi, θj ∈ V and DKL(Fi ‖ Fj) ≤ κδ ∀ θi, θj ∈ V, the minimax risk of the learner is lower bounded by

  Mn ≥ δ [ 1 − ( (α² / (1 − α)) Vol(Z) n κ δ + log 2 ) / log |V| ].
Remarks. Comparing the two corollaries to the standard forms of the Le Cam and Fano bounds shows that a uniform attack has the effect of upper-bounding the effective sample size at n (α² / (1 − α)) Vol(Z). The range of α for which this bound results in a reduction in effective sample size beyond the trivial reduction to αn depends on Vol(Z). We illustrate the consequences of these corollaries for some classical estimation problems in Section 5.
4.2 Blind learner, blind attacker
We begin with a lemma showing that for α ≤ 1/2, i.e. for higher rates of injection, the attacker can make learning impossible beyond permutation. Similar results have been shown in [18], among others, and this is included for completeness.

Lemma 1 (Impossibility of learning beyond permutation for α ≤ 0.5). Consider any hypotheses θ1 and θ2, with F1 ≪ F2 and F2 ≪ F1. We construct V = {F, G}² = {(F1, G1), (F2, G2)}. For all α ≤ 0.5, there exist choices of G1 and G2 such that DKL(P1 ‖ P2) + DKL(P2 ‖ P1) = 0.

The proof progresses by considering g1(z) = α f2(z)/(1 − α) + c and g2(z) = α f1(z)/(1 − α) + c, such that ‖P1 − P2‖TV = 0. The full proof is provided in the appendix.
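The construction in the proof sketch is easy to verify numerically: on a grid over Z = [−1, 1], the two mixtures coincide pointwise, and the normalising constant c is nonnegative precisely when α ≤ 1/2. The densities f1 and f2 below are arbitrary choices for illustration.

```python
import numpy as np

alpha = 0.4
z = np.linspace(-1, 1, 1001)   # support Z = [-1, 1], with Vol(Z) = 2
dz = z[1] - z[0]
f1 = np.exp(-(z - 0.3) ** 2); f1 /= f1.sum() * dz   # two candidate densities
f2 = np.exp(-(z + 0.3) ** 2); f2 /= f2.sum() * dz
c = (1 - alpha / (1 - alpha)) / 2.0                 # makes g1, g2 integrate to 1
g1 = alpha * f2 / (1 - alpha) + c
g2 = alpha * f1 / (1 - alpha) + c
p1 = alpha * f1 + (1 - alpha) * g1
p2 = alpha * f2 + (1 - alpha) * g2
print(np.max(np.abs(p1 - p2)))                      # ~0: the mixtures are identical
```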
It is unnecessary to further consider values of α less than 0.5. We proceed with an attack where the attacker chooses a family of malicious distributions G which mimics the family of candidate distributions of interest F, and show that this increases the lower bound on the learner's minimax risk for 0.5 < α < 3/4.

Theorem 2 (Mimic attack). Consider any hypotheses θ1 and θ2, with F1 ≪ F2 and F2 ≪ F1. The attacker picks G = F. We construct V = {F, G}² = {(F1, G1), (F2, G2)}, where G1 = F2 and G2 = F1. Then:

  DKL(P1 ‖ P2) + DKL(P2 ‖ P1) ≤ 4 ((2α − 1)² / (1 − α)) ‖F1 − F2‖²TV.
The proof progresses by upper bounding |log(p2(z)/p1(z))| by log(α/(1 − α)), and consequently upper bounding the pairwise KL divergence in terms of the total variation distance. It is presented in the appendix. By applying the standard Le Cam bound with the bound on KL divergence provided by the theorem, we obtain:
Corollary 2.1 (Le Cam bound with mimic attack). Given a data injection attack as described in Theorem 2, the minimax risk of the learner is lower bounded by

  Mn ≥ L(θ1, θ2) [ 1/2 − (1/2) √( n ((2α − 1)² / (1 − α)) ‖F1 − F2‖²TV ) ].
Remarks. For α ∈ [0, 3/4], comparing the corollary to the standard form of the Le Cam bound shows that this attack reduces the effective sample size from n to ((2α − 1)² / (1 − α)) n. We illustrate the consequences of this corollary for estimating a mean in Section 5. There are two main differences between this result and the bound for the uniform attack. Firstly, the dependence on (2α − 1)² instead of α² means that the KL divergence rapidly approaches zero as α → 1/2, rather than as α → 0 as in the uniform attack. Secondly, there is no dependence on the volume of the support of the data.
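The effective-sample-size factor in the remark is easy to tabulate, which makes the collapse as α → 1/2 vivid:

```python
# Effective sample size under the mimic attack: n -> n * (2*alpha - 1)**2 / (1 - alpha).
for alpha in (0.55, 0.6, 0.65, 0.7, 0.75):
    factor = (2 * alpha - 1) ** 2 / (1 - alpha)
    print(f"alpha = {alpha:.2f}: effective fraction of n kept = {factor:.3f}")
```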
5 Minimax rates of convergence under blind attack
We analyze the minimax risk in the settings of mean estimation and of fixed-design linear regression
by showing how the blind attack forms of the Le Cam and Fano bounds modify the lower bounds on
the minimax risk for each model.
5.1 Mean estimation
In this section, we address the simple problem of estimating a one-dimensional mean when the training set is subject to a blind attack. Consider the following family, where Θ is the interval [−1, 1]:

  F = {Fθ : E_{Fθ} X = θ; E_{Fθ} X² ≤ 1; θ ∈ Θ}.

We apply Theorems 1 and 2 and the associated Le Cam bounds to obtain:
Proposition 1 (Mean estimation under uniform attack – blind attacker, informed learner). If the attacker carries out a uniform attack as presented in Theorem 1, then there exists a universal constant 0 < c < ∞ such that the minimax risk is bounded as:

  Mn ≥ c min{ 1, √( (1 − α) / (α² n) ) }.
The proof is direct by using the uniform-attack form of the Le Cam lower bound on minimax risk
presented in corollary 1.1 in the proof of (20) in [21] in place of the differentially private form of the
lower bound in equation (16) of that paper.
Proposition 2 (Mean estimation under mimic attack – blind attacker, blind learner). If the attacker carries out a mimic attack as presented in Theorem 2, then there exists a universal constant 0 < c < ∞ such that the minimax risk is bounded as:

  Mn ≥ c min{ 1, (1 / (4α − 2)) √( (1 − α) / n ) }.

The proof is direct by using the mimic-attack form of the Le Cam lower bound on minimax risk presented in Corollary 2.1 in the proof of (20) in [21], in place of the differentially private form of the lower bound in equation (16) of that paper.
5.2 Linear regression with fixed design
We now consider the minimax risk in a standard fixed-design linear regression problem. Consider a fixed design matrix X ∈ R^{n×d} and the standard linear model

  Y = X θ* + ε,

where ε ∈ Rⁿ is a vector of independent noise variables, with each entry of the noise vector upper bounded as |εi| ≤ σ < ∞ ∀ i. We assume that the problem is appropriately scaled so that ‖X‖∞ ≤ 1 and ‖Y‖∞ ≤ 1, and so that it suffices to consider θ* ∈ Θ, where Θ = S^d is the d-dimensional unit sphere. The loss function is the squared ℓ2 loss with respect to θ*: L(θ̂n, θ*) = ‖θ̂n − θ*‖²₂. It is also assumed that X is full rank, to make estimation of θ possible.
Proposition 3 (Linear regression under uniform attack – blind attacker, informed learner). If the attacker carries out a uniform attack per Theorem 1, and si(A) is the ith singular value of A, then the minimax risk is bounded by

  Mn ≥ min{ 1, (σ² d (1 − α)) / (n α² s²max(X/√n)) }.
The proof is direct by using the uniform-attack form of the Fano lower bound on minimax risk presented in Corollary 1.2 in the proof of (22) in [21], in place of the differentially private form of the lower bound in equation (19) of that paper, noting that Vol(Z) ≤ 1 by construction. If we consider the orthonormal design case, such that s²max(X/√n) = 1, and recall that the lower bound on the minimax risk of linear regression in traditional settings is O(σ²d/n), we see a clear reduction in effective sample size from n to (α² / (1 − α)) n.
6 Discussion
We have approached the problem of data injection attacks on machine learners from a statistical decision theory framework, considering the setting where the attacker does not observe the true distribution of interest or the learner's training set prior to choosing a distribution from which to draw malicious examples. This has applications to the theoretical analysis of both security settings, where an attacker attempts to compromise a machine learner through data injection, and privacy settings, where a user of a service aims to protect their own privacy by submitting some proportion of falsified data. We identified simple attacks, in settings where the learner is and is not aware of the malicious distribution used, which reduce the effective sample size when considering rates of convergence of estimators. These attacks maximize lower bounds on the minimax risk. These lower bounds may not be tight, and we leave as future work a thorough exploration of the optimality of attacks in this setting and the establishment of optimal estimation procedures in the presence of such attacks. Exploration of attacks on machine learners in the minimax framework should lead to better understanding of the influence an attacker might have over a learner in settings where the attacker has little information.
References
(1) M. Barreno, B. Nelson, R. Sears, A. D. Joseph and J. D. Tygar, ACM Symposium on Information, Computer and Communications Security, 2006.
(2) M. Barreno, B. Nelson, A. D. Joseph and J. Tygar, Machine Learning, 2010, 81, 121-148.
(3) P. Laskov and M. Kloft, ACM Workshop on Security and Artificial Intelligence, 2009.
(4) P. Laskov and R. Lippmann, Machine Learning, 2010, 81, 115-119.
(5) H. Xiao, H. Xiao and C. Eckert, European Conference on Artificial Intelligence, 2012, pp. 870-875.
(6) B. Biggio, B. Nelson and P. Laskov, arXiv preprint arXiv:1206.6389, 2012.
(7) B. I. Rubinstein, B. Nelson, L. Huang, A. D. Joseph, S.-h. Lau, N. Taft and D. Tygar, EECS Department, University of California, Berkeley, Tech. Rep. UCB/EECS-2008-73, 2008.
(8) R. Sommer and V. Paxson, IEEE Symposium on Security and Privacy, 2010.
(9) R. J. Bolton and D. J. Hand, Statistical Science, 2002, 235-249.
(10) M. Al Hasan, V. Chaoji, S. Salem and M. Zaki, SDM Workshop on Link Analysis, Counterterrorism and Security, 2006.
(11) S. Mei and X. Zhu, Association for the Advancement of Artificial Intelligence, 2015.
(12) W. Liu and S. Chawla, IEEE International Conference on Data Mining, 2009.
(13) S. Alfeld, X. Zhu and P. Barford, Association for the Advancement of Artificial Intelligence, 2016.
(14) M. Bruckner and T. Scheffer, ACM SIGKDD, 2011.
(15) C. Dwork, in Automata, Languages and Programming, Springer, 2006, pp. 1-12.
(16) M. Chen, C. Gao and Z. Ren, arXiv preprint arXiv:1511.04144, 2015.
(17) M. Azizyan, A. Singh and L. Wasserman, Neural Information Processing Systems, 2013.
(18) Y. Chen, X. Yi and C. Caramanis, arXiv preprint arXiv:1312.7006, 2013.
(19) M. Kearns and M. Li, SIAM Journal on Computing, 1993, 22, 807-837.
(20) J. Duchi, M. J. Wainwright and M. I. Jordan, Neural Information Processing Systems, 2013.
(21) J. Duchi, M. Wainwright and M. Jordan, arXiv preprint arXiv:1302.3203v4, 2014.
(22) A. B. Tsybakov, Introduction to Nonparametric Estimation, Springer Publishing Company, Incorporated, 1st edition, 2008.
6,061 | 6,483 | Minimax Estimation of Maximum Mean Discrepancy
with Radial Kernels
Ilya Tolstikhin
Department of Empirical Inference
MPI for Intelligent Systems
Tübingen 72076, Germany
ilya@tuebingen.mpg.de
Bharath K. Sriperumbudur
Department of Statistics
Pennsylvania State University
University Park, PA 16802, USA
bks18@psu.edu
Bernhard Schölkopf
Department of Empirical Inference
MPI for Intelligent Systems
Tübingen 72076, Germany
bs@tuebingen.mpg.de
Abstract
Maximum Mean Discrepancy (MMD) is a distance on the space of probability
measures which has found numerous applications in machine learning and nonparametric testing. This distance is based on the notion of embedding probabilities in a
reproducing kernel Hilbert space. In this paper, we present the first known lower
bounds for the estimation of MMD based on finite samples. Our lower bounds
hold for any radial universal kernel on Rd and match the existing upper bounds up
to constants that depend only on the properties of the kernel. Using these lower
bounds, we establish the minimax rate optimality of the empirical estimator and its
U -statistic variant, which are usually employed in applications.
1 Introduction
Over the past decade, the notion of embedding probability measures in a Reproducing Kernel
Hilbert Space (RKHS) [1, 13, 18, 17] has gained a lot of attention in machine learning, owing to
its wide applicability. Some popular applications of RKHS embedding of probabilities include two-sample testing [5, 6], independence [7] and conditional independence testing [3], feature selection
[14], covariate-shift [13], causal discovery [9], density estimation [15], kernel Bayes' rule [4],
and distribution regression [20]. This notion of embedding probability measures can be seen as a
generalization of classical kernel methods which deal with embedding points of an input space as
elements in an RKHS. Formally, given a probability measure P and a continuous positive definite
real-valued kernel k (we denote by $\mathcal H$ the corresponding RKHS) defined on a separable topological space X, P is embedded into $\mathcal H$ as $\mu_P := \int k(\cdot, x)\,dP(x)$, called the mean element or the kernel mean, assuming k and P satisfy $\int_X \sqrt{k(x,x)}\,dP(x) < \infty$. Based on the above embedding of P, [5] defined a distance, called the Maximum Mean Discrepancy (MMD), on the space of probability measures as the distance between the corresponding mean elements, i.e.,
$$\mathrm{MMD}_k(P,Q) = \|\mu_P - \mu_Q\|_{\mathcal H}.$$
We refer the reader to [18, 17] for a detailed study on the properties of MMD and its relation to other
distances on probabilities.
Estimation of kernel mean. In all the above mentioned applications, since the only knowledge of
the underlying distribution is through random samples drawn from it, an estimate of $\mu_P$ is employed in practice. In applications such as two-sample tests [5, 6] and independence tests [7] that involve MMD, an estimate of MMD is constructed based on the estimates of $\mu_P$ and $\mu_Q$ respectively. The simple and most popular estimator of $\mu_P$ is the empirical estimator, $\mu_{P_n} := \frac{1}{n}\sum_{i=1}^n k(\cdot, X_i)$, which is a Monte Carlo approximation of $\mu_P$ based on random samples $(X_i)_{i=1}^n$ drawn i.i.d. from P.
Recently, [10] proposed a shrinkage estimator of $\mu_P$ based on the idea of James-Stein shrinkage, which is demonstrated to empirically outperform $\mu_{P_n}$. While both these estimators are shown to be $\sqrt n$-consistent [13, 5, 10], it was not clear until the recent work of [21] whether any of these estimators are minimax rate optimal, i.e., is there an estimator of $\mu_P$ that yields a convergence rate faster than $n^{-1/2}$? Based on the minimax optimality of the sample mean (i.e., $\bar X := \frac1n\sum_{i=1}^n X_i$) for the estimation of a finite-dimensional mean of a normal distribution at a minimax rate of $n^{-1/2}$ [8, Chapter 5, Example 1.14], while one can intuitively argue that the empirical and shrinkage estimators of $\mu_P$ are minimax rate optimal, it is difficult to extend the finite-dimensional argument in a rigorous manner to the estimation of the infinite-dimensional object, $\mu_P$. Note that $\mathcal H$ is infinite dimensional if k is universal [19, Chapter 4], e.g., the Gaussian kernel. By establishing a remarkable relation between the MMD of two Gaussian distributions and the Euclidean distance between their means for any bounded continuous translation invariant universal kernel on $X = \mathbb R^d$, [21] rigorously showed that the estimation of $\mu_P$ is only as hard as the estimation of the finite-dimensional mean of a normal distribution and thereby established the minimax rate of estimating $\mu_P$ to be $n^{-1/2}$. This in turn demonstrates the minimax rate optimality of the empirical and shrinkage estimators of $\mu_P$.
Estimation of MMD. In this paper, we are interested in the minimax optimal estimation of $\mathrm{MMD}_k(P,Q)$. The question of finding optimal estimators of MMD is of interest in applications such as kernel-based two-sample [5] and independence tests [7], as the test statistic is indeed an estimate of MMD and it is important to use statistically optimal estimators in the construction of these kernel-based tests. An estimator of MMD that is currently employed in these applications is based on the empirical estimators of $\mu_P$ and $\mu_Q$, i.e.,
$$\mathrm{MMD}_{n,m} := \|\mu_{P_n} - \mu_{Q_m}\|_{\mathcal H},$$
which is constructed from samples $(X_i)_{i=1}^n \overset{\text{i.i.d.}}{\sim} P$ and $(Y_i)_{i=1}^m \overset{\text{i.i.d.}}{\sim} Q$. [5, 7] also considered a U-statistic variant of $\mathrm{MMD}_{n,m}$ as a test statistic in these applications. As discussed above, while $\mu_{P_n}$ and $\mu_{Q_m}$ are minimax rate optimal estimators of $\mu_P$ and $\mu_Q$ respectively, they need not guarantee that $\mathrm{MMD}_{n,m}$ is minimax rate optimal. Using the fact that $\|\mu_{P_n} - \mu_P\|_{\mathcal H} = O_p(n^{-1/2})$ and
$$|\mathrm{MMD}_k(P,Q) - \mathrm{MMD}_{n,m}| \ \leq\ \|\mu_P - \mu_{P_n}\|_{\mathcal H} + \|\mu_{Q_m} - \mu_Q\|_{\mathcal H},$$
it is easy to see that
$$|\mathrm{MMD}_k(P,Q) - \mathrm{MMD}_{n,m}| = O_p\bigl(n^{-1/2} + m^{-1/2}\bigr). \tag{1}$$
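To make the estimator concrete, here is a short illustrative implementation (our own sketch, not code from the paper) of the plug-in estimator $\mathrm{MMD}_{n,m}$ for a Gaussian kernel, expanded via the kernel trick into pairwise kernel sums; the bandwidth and the sample distributions below are arbitrary example choices.

import numpy as np

def mmd_plugin(X, Y, sigma2=1.0):
    """Biased (V-statistic) estimate of MMD_k(P, Q) from samples X and Y."""
    def gram(A, B):
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-d2 / (2 * sigma2))
    n, m = len(X), len(Y)
    # ||mu_{P_n} - mu_{Q_m}||_H^2 expanded into three pairwise kernel sums:
    val = gram(X, X).sum() / n**2 + gram(Y, Y).sum() / m**2 \
        - 2 * gram(X, Y).sum() / (n * m)
    return np.sqrt(max(val, 0.0))

rng = np.random.default_rng(0)
X = rng.normal(0.0, 1.0, size=(200, 2))
Y = rng.normal(0.5, 1.0, size=(300, 2))
print(mmd_plugin(X, Y))  # converges to MMD_k(P, Q) at rate n^{-1/2} + m^{-1/2}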
In fact, if k is a bounded kernel, it can be shown that the constants (which are hidden in the order
notation in (1)) depend only on the bound on the kernel and are independent of X , P and Q. The
goal of this work is to find the minimax rate rn,m,k (P) and a positive constant ck (P) (independent
of m and n) such that
$$\inf_{\hat F_{n,m}}\ \sup_{P,Q\in\mathcal P}\ P^n\otimes Q^m\Bigl\{ r_{n,m,k}^{-1}(\mathcal P)\,\bigl|\hat F_{n,m} - \mathrm{MMD}_k(P,Q)\bigr| \ \geq\ c_k(\mathcal P) \Bigr\} \ >\ 0, \tag{2}$$
where $\mathcal P$ is a suitable subset of Borel probability measures on X, the infimum is taken over all estimators $\hat F_{n,m}$ mapping the i.i.d. sample $\{(X_i)_{i=1}^n, (Y_i)_{i=1}^m\}$ to $\mathbb R_+$, and $P^n \otimes Q^m$ denotes the probability measure associated with the sample when $(X_i)_{i=1}^n \overset{\text{i.i.d.}}{\sim} P$ and $(Y_i)_{i=1}^m \overset{\text{i.i.d.}}{\sim} Q$. In addition to the rate, we are also interested in the behavior of $c_k(\mathcal P)$ in terms of its dependence on k, X and $\mathcal P$.
Contributions. The main contribution of the paper is in establishing $m^{-1/2} + n^{-1/2}$, i.e., $r_{n,m,k}(\mathcal P) = \sqrt{(m+n)/(mn)}$, as the minimax rate for estimating $\mathrm{MMD}_k(P,Q)$ when k is a radial universal kernel (examples include the Gaussian, Matérn and inverse multiquadric kernels) on $\mathbb R^d$ and $\mathcal P$ is the set of all Borel probability measures on $\mathbb R^d$ with infinitely differentiable densities. This result guarantees that $\mathrm{MMD}_{n,m}$ and its U-statistic variant are minimax rate optimal estimators of $\mathrm{MMD}_k(P,Q)$, which thereby ensures the minimax optimality of the test statistics used in kernel two-sample and independence tests. We would like to highlight the fact that our result of the minimax lower bound on $\mathrm{MMD}_k(P,Q)$ implies part of the results of [21] related to the minimax estimation of $\mu_P$, as it can be seen that any $\varepsilon$-accurate estimators $\hat\mu_P$ and $\hat\mu_Q$ of $\mu_P$ and $\mu_Q$ respectively in the RKHS norm lead to the $2\varepsilon$-accurate estimator $\hat F_{n,m} := \|\hat\mu_P - \hat\mu_Q\|_{\mathcal H}$ of $\mathrm{MMD}_k(P,Q)$, i.e.,
$$c_k(\mathcal P)\bigl(n^{-1/2} + m^{-1/2}\bigr) \ \leq\ \bigl|\mathrm{MMD}_k(P,Q) - \hat F_{n,m}\bigr| \ \leq\ \|\mu_P - \hat\mu_P\|_{\mathcal H} + \|\mu_Q - \hat\mu_Q\|_{\mathcal H}.$$
In Section 2, we present the main results of our work, wherein Theorem 1 is developed by employing the ideas of [21] involving Le Cam's method (see Theorem 3) [22, Sections 2.3 and 2.6]. However, we show that while the minimax rate is $m^{-1/2} + n^{-1/2}$, there is a sub-optimal dependence on d in the constant $c_k(\mathcal P)$, which makes the result uninteresting in high-dimensional scenarios. To alleviate this issue, we present a refined result in Theorem 2 based on the method of two fuzzy hypotheses (see Theorem 4) [22, Section 2.7.4], which shows that $c_k(\mathcal P)$ in (2) is independent of d (i.e., X). This result provides a sharp lower bound for MMD estimation both in terms of the rate and the constant (which is independent of X) that matches the behavior of the upper bound for $\mathrm{MMD}_{n,m}$. The proofs of these results are provided in Section 3, while supplementary results are collected in an appendix.
Notation. In this work we focus on radial kernels, i.e., $k(x,y) = \phi(\|x-y\|^2)$ for all $x, y \in \mathbb R^d$. Schoenberg's theorem [12] states that a radial kernel k is positive definite for every d if and only if there exists a non-negative finite Borel measure $\nu$ on $[0,\infty)$ such that
$$k(x,y) = \int_0^\infty e^{-t\|x-y\|^2}\, d\nu(t) \tag{3}$$
for all $x, y \in \mathbb R^d$. An important example of a radial kernel is the Gaussian kernel $k(x,y) = \exp\{-\|x-y\|^2/(2\sigma^2)\}$ for $\sigma^2 > 0$. [17, Proposition 5] showed that k in (3) is universal if and only if $\mathrm{supp}(\nu) \neq \{0\}$, where for a finite non-negative Borel measure $\nu$ on $\mathbb R^d$ we define $\mathrm{supp}(\nu) = \{x \in \mathbb R^d \mid \text{if } x \in U \text{ and } U \text{ is open, then } \nu(U) > 0\}$.
2 Main results
In this section, we present the main results of our work wherein we develop minimax lower bounds for
the estimation of MMDk (P, Q) when k is a radial universal kernel on Rd . We show that the minimax
rate for estimating $\mathrm{MMD}_k(P,Q)$ based on random samples $(X_i)_{i=1}^n \overset{\text{i.i.d.}}{\sim} P$ and $(Y_i)_{i=1}^m \overset{\text{i.i.d.}}{\sim} Q$ is $m^{-1/2} + n^{-1/2}$, thereby establishing the minimax rate optimality of the empirical estimator $\mathrm{MMD}_{n,m}$ of $\mathrm{MMD}_k(P,Q)$. First, we present the following result (proved in Section 3.1) for Gaussian kernels, which is based on an argument similar to the one used in [21] to obtain a minimax lower bound for the estimation of $\mu_P$.
Theorem 1. Let $\mathcal P$ be the set of all Borel probability measures over $\mathbb R^d$ with continuously infinitely differentiable densities. Let k be a Gaussian kernel with bandwidth parameter $\sigma^2 > 0$. Then the following holds:
$$\inf_{\hat F_{n,m}}\ \sup_{P,Q\in\mathcal P}\ P^n\otimes Q^m\left\{\bigl|\mathrm{MMD}_k(P,Q) - \hat F_{n,m}\bigr| \ \geq\ \frac{1}{8\sqrt{d+1}}\,\max\left\{\frac{1}{\sqrt n},\ \frac{1}{\sqrt m}\right\}\right\} \ \geq\ \frac15. \tag{4}$$
The following remarks can be made about Theorem 1.
(a) Theorem 1 shows that $\mathrm{MMD}_k(P,Q)$ cannot be estimated at a rate faster than $\max\{n^{-1/2}, m^{-1/2}\}$ by any estimator $\hat F_{n,m}$ for all $P, Q \in \mathcal P$. Since $\max\{m^{-1/2}, n^{-1/2}\} \geq \frac12\bigl(m^{-1/2} + n^{-1/2}\bigr)$, the result combined with (1) therefore establishes the minimax rate optimality of the empirical estimator, $\mathrm{MMD}_{n,m}$.
(b) While Theorem 1 shows the right order of dependence on m and n, the dependence on d seems to be sub-optimal, as the upper bound on $|\mathrm{MMD}_{n,m} - \mathrm{MMD}_k(P,Q)|$ depends only on the bound on the kernel and is independent of d. This sub-optimal dependence on d may be due to the fact that the proof of Theorem 1 (see Section 3.1) is, as aforementioned, closely based on the arguments applied in [21] for the minimax estimation of $\mu_P$. While the lower bounding technique used in [21] (commonly known as Le Cam's method based on many hypotheses [22, Chapter 2]) provides optimal results in the problem of estimation of functions (e.g., estimation of $\mu_P$ in the norm of $\mathcal H$), it often fails to do so in the case of estimation of real-valued functionals, which is precisely the focus of our work. Even though Theorem 1 is sub-optimal, we presented the result to highlight the fact that the minimax lower bounds for estimation of $\mu_P$ may not yield optimal results for $\mathrm{MMD}_k(P,Q)$. In Theorem 2, we will develop a new argument based on two fuzzy hypotheses, which is a method of choice for nonparametric estimation of functionals [22, Section 2.7.4]. This will allow us to get rid of the superfluous dependence on the dimensionality d in the lower bound.
(c) While Theorem 1 holds only for Gaussian kernels, we would like to mention that by using the analysis of [21], Theorem 1 can be straightforwardly improved in various ways: (i) it can be generalized to hold for a wide class of radial universal kernels, (ii) the factor $d^{-1/2}$ in (4) can be removed altogether for the case when $\mathcal P$ consists of all Borel discrete distributions on $\mathbb R^d$. However, these improvements do not involve any novel ideas beyond those captured by the proof of Theorem 1 and so will not be discussed in this work. For details, we refer an interested reader to Theorems 2 and 6 of [21] for extension to radial universal kernels and discrete measures, respectively.
(d) Finally, it is worth mentioning that any lower bound on the minimax probability (including the bounds of Theorems 1 and 2) leads to a lower bound on the minimax risk, which is based on a simple application of Markov's inequality: $\mathbb E_{P^n\otimes Q^m}\bigl[s_{n,m}^{-1}|A_{n,m}|\bigr] \geq P^n\otimes Q^m\{|A_{n,m}| \geq s_{n,m}\}$.
The following result (proved in Section 3.2) is the main contribution of this work. It provides a
minimax lower bound for the problem of MMD estimation, which holds for general radial universal
kernels. In contrast to Theorem 1, it avoids the superfluous dependence on d and depends only on the
properties of k while exhibiting the correct rate.
Theorem 2. Let $\mathcal P$ be the set of all Borel probability measures over $\mathbb R^d$ with continuously infinitely differentiable densities. Let k be a radial kernel on $\mathbb R^d$ of the form (3), where $\nu$ is a bounded non-negative measure on $[0,\infty)$. Assume that there exist $0 < t_0 \leq t_1 < \infty$ and $0 < \beta \leq 1$ such that $\nu([t_0, t_1]) \geq \beta$. Then the following holds:
$$\inf_{\hat F_{n,m}}\ \sup_{P,Q\in\mathcal P}\ P^n\otimes Q^m\left\{\bigl|\mathrm{MMD}_k(P,Q) - \hat F_{n,m}\bigr| \ \geq\ \frac{\sqrt\beta}{20}\sqrt{\frac{t_0}{t_1 e}}\,\max\left\{\frac{1}{\sqrt n},\ \frac{1}{\sqrt m}\right\}\right\} \ \geq\ \frac{1}{14}. \tag{5}$$
1
Note that the existence of 0 < t0 ? t1 < 1 and 0 < < 1 such that ?([t0 , t1 ])
ensures that
supp(?) 6= {0} (i.e., the kernel is not a constant function), which implies k is universal. If k is a
Gaussian kernel with bandwidth parameter ? 2 > 0, it is easy to verify that t0 = t1 = (2? 2 ) 1 and
= 1 satisfy ?([t0 , t1 ])
as the Gaussian kernel is generated by ? = 1/(2?2 ) in (3), where x is
a Dirac measure supported at x. Therefore we obtain a dimension independent constant in (5) for
Gaussian kernels compared to the bound in (4).
3 Proofs
In this section, we present the proofs of Theorems 1 and 2. Before we present the proofs, we first
introduce the setting of nonparametric estimation. Let $F : \Theta \to \mathbb R$ be a functional defined on a measurable space $\Theta$ and $\mathcal P_\Theta = \{P_\theta : \theta \in \Theta\}$ be a family of probability distributions indexed by $\theta$ and defined over a measurable space $\mathcal X$ associated with data. We observe the data $D \in \mathcal X$ distributed according to an unknown element $P_\theta \in \mathcal P_\Theta$, and the goal is to estimate $F(\theta)$. Usually $\mathcal X$, D, and $\mathcal P_\Theta$ will depend on the sample size n. Let $\hat F_n := \hat F_n(D)$ be an estimator of $F(\theta)$ based on D. The following well-known result [22, Theorem 2.2] provides a lower bound on the minimax probability of this problem. We refer the reader to Appendix A for a proof of its more general version.
Theorem 3. Assume there exist $\theta_0, \theta_1 \in \Theta$ such that $|F(\theta_0) - F(\theta_1)| \geq 2s > 0$ and $\mathrm{KL}(P_{\theta_1}\|P_{\theta_0}) \leq \alpha$ with $0 < \alpha < \infty$. Then
$$\inf_{\hat F_n}\ \sup_{\theta\in\Theta}\ P_\theta\bigl\{|\hat F_n(D) - F(\theta)| \geq s\bigr\} \ \geq\ \max\left(\frac{e^{-\alpha}}{4},\ \frac{1 - \sqrt{\alpha/2}}{2}\right),$$
where $\mathrm{KL}(P_{\theta_1}\|P_{\theta_0}) := \int \log\frac{dP_{\theta_1}}{dP_{\theta_0}}\, dP_{\theta_1}$ denotes the Kullback-Leibler divergence between $P_{\theta_1}$ and $P_{\theta_0}$.
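For intuition, the lower bound of Theorem 3 is a simple function of the KL budget. The tiny helper below (illustrative, not from the paper) evaluates it: any estimator must fail to be s-accurate with at least this probability whenever the two hypotheses are 2s-separated.

import math

def le_cam_two_point_bound(alpha: float) -> float:
    """max(e^{-alpha}/4, (1 - sqrt(alpha/2))/2) from Theorem 3."""
    return max(math.exp(-alpha) / 4, (1 - math.sqrt(alpha / 2)) / 2)

for a in [0.1, 0.5, 1.0, 2.0]:
    print(a, round(le_cam_two_point_bound(a), 4))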
The above result (also called Le Cam's method) provides the recipe for obtaining minimax lower bounds, where the goal is to construct two hypotheses $\theta_0, \theta_1 \in \Theta$ such that (i) $F(\theta_0)$ and $F(\theta_1)$ are far apart, while (ii) the corresponding distributions, $P_{\theta_0}$ and $P_{\theta_1}$, are close enough. The requirement (i) can be relaxed by introducing two random (fuzzy) hypotheses $\theta_0, \theta_1 \in \Theta$ and requiring $F(\theta_0)$ and $F(\theta_1)$ to be far apart with high probability. This weaker requirement leads to a lower bounding technique, called the method of two fuzzy hypotheses. This method is captured by the following theorem [22, Theorem 2.14] and is commonly used to derive lower bounds on the minimax risk in the problem of estimation of functionals [22, Section 2.7.4].
Theorem 4. Let $\pi_0$ and $\pi_1$ be any probability distributions over $\Theta$. Assume that
1. There exist $c \in \mathbb R$, $s > 0$, $0 \leq \beta_0, \beta_1 < 1$ such that $\pi_0\{\theta : F(\theta) \leq c\} \geq 1 - \beta_0$ and $\pi_1\{\theta : F(\theta) \geq c + 2s\} \geq 1 - \beta_1$.
2. There exist $\tau > 0$ and $0 < \delta < 1$ such that $P_1\bigl(\frac{dP_0^a}{dP_1} \geq \tau\bigr) \geq 1 - \delta$, where
$$P_i(D) = \int_\Theta P_\theta(D)\,\pi_i(d\theta), \qquad i \in \{0,1\},$$
and $P_0^a$ is the absolutely continuous component of $P_0$ with respect to $P_1$.
Then
$$\inf_{\hat F_n}\ \sup_{\theta\in\Theta}\ P_\theta\bigl\{|\hat F_n(D) - F(\theta)| \geq s\bigr\} \ \geq\ \frac{\tau\,(1 - \delta - \beta_1)}{1+\tau} - \beta_0.$$
With this set up and background, we are ready to prove Theorems 1 and 2.
3.1 Proof of Theorem 1
The proof is based on Theorem 3 and treats the two cases $m \geq n$ and $m < n$ separately. We consider only the case $m \geq n$, as the second one follows the same steps. Let $\mathcal G_d$ denote the class of multivariate Gaussian distributions over $\mathbb R^d$ with covariance matrices proportional to the identity matrix $I_d \in \mathbb R^{d\times d}$. In our case $\mathcal G_d \subset \mathcal P$, which leads to the following lower bound for any $s > 0$:
$$\sup_{P,Q\in\mathcal P} P^n\otimes Q^m\bigl\{|\mathrm{MMD}_k(P,Q) - \hat F_{n,m}| \geq s\bigr\} \ \geq\ \sup_{P,Q\in\mathcal G_d} P^n\otimes Q^m\bigl\{|\mathrm{MMD}_k(P,Q) - \hat F_{n,m}| \geq s\bigr\}.$$
Note that every element $G(\mu, \gamma^2 I_d) \in \mathcal G_d$ is indexed by a pair $(\mu, \gamma^2) \in \mathbb R^d \times (0,\infty) =: \tilde\Theta$. Given two elements $P, Q \in \mathcal G_d$, the data is distributed according to $P^n \otimes Q^m$. This brings us into the context of Theorem 3 with $\Theta := \tilde\Theta \times \tilde\Theta$, $\mathcal X := (\mathbb R^d)^{n+m}$, $P_\theta := G_1^n \otimes G_2^m$ for $\theta = (\tilde\theta_1, \tilde\theta_2) \in \Theta$ with Gaussian distributions $G_1$ and $G_2$ corresponding to parameters $\tilde\theta_1, \tilde\theta_2 \in \tilde\Theta$ respectively, and $F(\theta) = \mathrm{MMD}_k(G_1, G_2)$.
In order to apply Theorem 3 we need to choose two probability distributions $P_{\theta_0}$ and $P_{\theta_1}$. We define four different d-dimensional Gaussian distributions:
$$P_0 = G(\mu_0^P, \gamma^2 I_d), \qquad Q_0 = G(\mu_0^Q, \gamma^2 I_d), \qquad P_1 = Q_1 = G(0, \gamma^2 I_d)$$
with
$$\gamma^2 = \frac{c_1\sigma^2}{d}\Bigl(2 + \frac nm\Bigr), \quad \|\mu_0^P\|^2 = \frac{c_2\sigma^2}{d}\Bigl(\frac1n + \frac1m\Bigr), \quad \|\mu_0^Q\|^2 = \frac{c_2\sigma^2}{dm}, \quad \|\mu_0^P - \mu_0^Q\|^2 = \frac{c_3\sigma^2}{dn},$$
where $c_1, c_2, c_3 > 0$ are positive constants independent of m and n to be specified later. Note that this construction is possible as long as $\sqrt{\frac{c_3}{n}} \leq \sqrt{c_2\bigl(\frac1n + \frac1m\bigr)} + \sqrt{\frac{c_2}{m}}$, which is clearly satisfied if $c_3 \leq c_2$.
First we will check the upper bound on the KL divergence between the distributions. Using the chain rule of KL divergence and its closed form expression for Gaussian distributions we write
$$\mathrm{KL}\bigl(P_1^n\otimes Q_1^m\,\big\|\,P_0^n\otimes Q_0^m\bigr) = n\,\frac{\|\mu_0^P\|^2}{2\gamma^2} + m\,\frac{\|\mu_0^Q\|^2}{2\gamma^2} = \frac{c_2\bigl(2 + \frac nm\bigr)}{2c_1\bigl(2 + \frac nm\bigr)} = \frac{c_2}{2c_1}.$$
Next we need to lower bound the absolute difference between $\mathrm{MMD}_k(P_0,Q_0)$ and $\mathrm{MMD}_k(P_1,Q_1)$. Note that
$$|\mathrm{MMD}_k(P_0,Q_0) - \mathrm{MMD}_k(P_1,Q_1)| = \mathrm{MMD}_k(P_0,Q_0). \tag{6}$$
Using a closed-form expression for the MMD between Gaussian distributions [21, Eq. 25] we write
$$\mathrm{MMD}_k^2(P_0,Q_0) = 2\left(\frac{\sigma^2}{\sigma^2 + 2\gamma^2}\right)^{d/2}\left(1 - \exp\left(-\frac{\|\mu_0^P - \mu_0^Q\|^2}{2\sigma^2 + 4\gamma^2}\right)\right).$$
Assume
$$\frac{\|\mu_0^P - \mu_0^Q\|^2}{2\sigma^2 + 4\gamma^2} \ \leq\ 1. \tag{7}$$
Using $1 - e^{-x} \geq x/2$, which holds for $x \in [0,1]$, we write
$$|\mathrm{MMD}_k(P_0,Q_0) - \mathrm{MMD}_k(P_1,Q_1)| \ \geq\ \left(\frac{d}{d + 2c_1\bigl(2 + \frac nm\bigr)}\right)^{d/4}\sqrt{\frac{\|\mu_0^P - \mu_0^Q\|^2}{2\sigma^2 + 4\gamma^2}}.$$
Since $m \geq n$ and $(1-x)^{1/x}$ monotonically decreases to $e^{-1}$, we have
$$\left(\frac{d}{d + 2c_1\bigl(2+\frac nm\bigr)}\right)^{d/4} \geq \left(\frac{1}{1 + 6c_1/d}\right)^{d/4} \geq e^{-3c_1/2} \qquad\text{and}\qquad \frac{\|\mu_0^P - \mu_0^Q\|^2}{2\sigma^2 + 4\gamma^2} \geq \frac{c_3}{n\,(2d + 12c_1)}.$$
Using this and setting $c_3 = c_2$ we get
$$|\mathrm{MMD}_k(P_0,Q_0) - \mathrm{MMD}_k(P_1,Q_1)| \ \geq\ \frac{1}{\sqrt n}\, e^{-3c_1/2}\sqrt{\frac{c_2}{2d + 12c_1}}.$$
Now we set $c_1 = 0.16$, $c_2 = 0.23$. Checking that Condition (7) is satisfied and noting that
$$\max\left(\frac14\, e^{-c_2/(2c_1)},\ \frac{1 - \sqrt{c_2/(4c_1)}}{2}\right) > \frac15 \qquad\text{and}\qquad \frac12\, e^{-3c_1/2}\sqrt{\frac{c_2}{2(d+6c_1)}} > \frac{1}{8\sqrt{d+1}},$$
we conclude the proof with an application of Theorem 3.
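A minimal numeric sanity check of the two final conditions, under the reconstruction above (our own sketch, not part of the paper):

import numpy as np

c1, c2 = 0.16, 0.23

# Probability bound from Theorem 3 with alpha = KL = c2 / (2 c1):
alpha = c2 / (2 * c1)
prob = max(np.exp(-alpha) / 4, (1 - np.sqrt(alpha / 2)) / 2)
print(f"probability floor = {prob:.4f} (> 1/5: {prob > 0.2})")

# Separation condition, checked for a range of dimensions d:
for d in [1, 10, 100, 10_000]:
    lhs = 0.5 * np.exp(-1.5 * c1) * np.sqrt(c2 / (2 * (d + 6 * c1)))
    rhs = 1 / (8 * np.sqrt(d + 1))
    assert lhs > rhs, d
print("separation condition holds for all tested d")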
3.2 Proof of Theorem 2
First, we repeat the argument presented in the proof of Theorem 1 to bring ourselves into the context of minimax estimation, introduced in the beginning of Section 3.1. Namely, we reduce the class of distributions $\mathcal P$ to its subset $\mathcal G_d$ containing all the multivariate Gaussian distributions over $\mathbb R^d$ with covariance matrices proportional to the identity matrix $I_d \in \mathbb R^{d\times d}$. The proof is based on Theorem 4 and treats the two cases $m \geq n$ and $m < n$ separately. We consider only the case $m \geq n$, as the second one follows the same steps.
In order to apply Theorem 4 we need to choose two "fuzzy hypotheses", that is, two probability distributions $\pi_0$ and $\pi_1$ over $\Theta$. In our setting there is a one-to-one correspondence between parameters $\theta \in \Theta$ and pairs of Gaussian distributions $(G_1, G_2) \in \mathcal G_d \times \mathcal G_d$. Throughout the proof it will be more convenient to treat $\pi_0$ and $\pi_1$ as distributions over $\mathcal G_d \times \mathcal G_d$. We will set $\pi_0$ to be a Dirac measure supported on $(P_0, Q_0)$ with $P_0 = Q_0 = G(0, \gamma^2 I_d)$. Clearly, $\mathrm{MMD}_k(P_0, Q_0) = 0$. This gives
$$\pi_0\bigl\{\theta : F(\theta) = 0\bigr\} = 1,$$
and the first inequality of Condition 1 in Theorem 4 holds with $c = 0$ and $\beta_0 = 0$. Next we set $\pi_1$ to be the distribution of a random pair $(P, Q)$ with
$$Q = G(0, \gamma^2 I_d), \qquad P = G(\mu, \gamma^2 I_d), \qquad \gamma^2 = \frac{1}{2 t_1 d},$$
where $\mu \sim P_\mu$ for some probability distribution $P_\mu$ over $\mathbb R^d$ to be specified later. Next we are going to check Condition 2 of Theorem 4. For $D = (x_1,\dots,x_n,\, y_1,\dots,y_m)$ define the "posterior" distributions
$$P_i(D) = \int_\Theta P_\theta(D)\,\pi_i(d\theta), \qquad i \in \{0,1\},$$
as in Theorem 4. Using Markov's inequality we write
$$P_1\left(\frac{dP_0}{dP_1} < \tau\right) = P_1\left(\frac{dP_1}{dP_0} > \frac1\tau\right) \ \leq\ \tau\cdot \mathbb E_1\left[\frac{dP_1}{dP_0}\right]. \tag{8}$$
We have
$$\frac{dP_1}{dP_0}(D) = \frac{\displaystyle\int_{\mathbb R^d} \prod_{j=1}^n e^{-\frac{\|x_j-\mu\|^2}{2\gamma^2}}\prod_{k=1}^m e^{-\frac{\|y_k\|^2}{2\gamma^2}}\, dP_\mu(\mu)}{\displaystyle\prod_{j=1}^n e^{-\frac{\|x_j\|^2}{2\gamma^2}}\prod_{k=1}^m e^{-\frac{\|y_k\|^2}{2\gamma^2}}} = \int_{\mathbb R^d} e^{-\frac{n\|\mu\|^2}{2\gamma^2}}\, e^{\frac{\langle\sum_{j=1}^n x_j,\,\mu\rangle}{\gamma^2}}\, dP_\mu(\mu).$$
Now we compute the expected value appearing in (8):
$$\mathbb E_{D\sim P_1}\left[\frac{dP_1}{dP_0}(D)\right] = \int_{\mathbb R^d}\int_{\mathbb R^d} e^{-\frac{n\|\mu\|^2}{2\gamma^2}}\,\mathbb E\left[e^{\frac{1}{\gamma^2}\bigl\langle\sum_{j=1}^n X_j^{\mu'},\,\mu\bigr\rangle}\right] dP_\mu(\mu')\,dP_\mu(\mu), \tag{9}$$
where $X_1^{\mu'},\dots,X_n^{\mu'}$ are independent and distributed according to $G(\mu', \gamma^2 I_d)$. Note that $\sum_{j=1}^n X_j^{\mu'} \sim G(n\mu',\, n\gamma^2 I_d)$, and as a result $\bigl\langle\sum_j X_j^{\mu'},\,\mu\bigr\rangle \sim G\bigl(n\langle\mu',\mu\rangle,\, n\gamma^2\|\mu\|^2\bigr)$. Using the closed form for the moment generating function of a Gaussian distribution $Z \sim G(\nu, \varsigma^2)$, $\mathbb E[e^{tZ}] = e^{\nu t}\,e^{\frac12\varsigma^2 t^2}$, we get
$$\mathbb E\left[e^{\frac{1}{\gamma^2}\bigl\langle\sum_j X_j^{\mu'},\,\mu\bigr\rangle}\right] = e^{\frac{n\langle\mu',\mu\rangle}{\gamma^2}}\, e^{\frac{n\|\mu\|^2}{2\gamma^2}}.$$
Together with (9) this gives
$$\mathbb E_{D\sim P_1}\left[\frac{dP_1}{dP_0}(D)\right] = \mathbb E\left[e^{\frac{n\langle\mu',\mu\rangle}{\gamma^2}}\right], \tag{10}$$
where $\mu$ and $\mu'$ are independent random variables both distributed according to $P_\mu$. Now we set $P_\mu$ to be the uniform distribution on a d-dimensional cube of appropriate size:
$$P_\mu := U\left[-\frac{c_1}{\sqrt{dnt_1}},\ \frac{c_1}{\sqrt{dnt_1}}\right]^d.$$
In this case, using Lemma B.1 presented in Appendix B, we get
$$\mathbb E\left[e^{\frac{n\langle\mu',\mu\rangle}{\gamma^2}}\right] = \prod_{i=1}^d \mathbb E\left[e^{\frac{n\mu_i\mu_i'}{\gamma^2}}\right] = \left(\frac{1}{4c_1^2}\,\mathrm{Shi}\bigl(2c_1^2\bigr)\right)^{\!d}.$$
Using (10) and also assuming
$$\frac{1}{4c_1^2}\,\mathrm{Shi}\bigl(2c_1^2\bigr) \ \leq\ 1, \tag{11}$$
we get
$$\mathbb E_{D\sim P_1}\left[\frac{dP_1}{dP_0}(D)\right] \ \leq\ \frac{1}{4c_1^2}\,\mathrm{Shi}\bigl(2c_1^2\bigr).$$
Combining with (8) we finally get $P_1\bigl(\frac{dP_0}{dP_1} < \tau\bigr) \leq \frac{\tau}{4c_1^2}\mathrm{Shi}(2c_1^2)$, or equivalently $P_1\bigl(\frac{dP_0}{dP_1} \geq \tau\bigr) \geq 1 - \frac{\tau}{4c_1^2}\mathrm{Shi}(2c_1^2)$. This shows that Condition 2 of Theorem 4 is satisfied with $\delta = \frac{\tau}{4c_1^2}\mathrm{Shi}(2c_1^2)$.
Finally, we need to check the second inequality of Condition 1 in Theorem 4. Take two Gaussian distributions $P = G(\mu, \gamma^2 I_d)$ and $Q = G(0, \gamma^2 I_d)$. Using [21, Eq. 30] we have
$$\mathrm{MMD}_k^2(P,Q) \ \geq\ \frac{\beta t_0}{e}\cdot\frac{d\,\|\mu\|^2}{2+d}, \qquad \text{given}\quad \gamma^2 = \frac{1}{2t_1 d} \quad\text{and}\quad t_1\|\mu\|^2 \leq 1 + 4t_1\gamma^2. \tag{12}$$
Notice that the largest diagonal of a d-dimensional cube scales as $\sqrt d$. Using this we conclude that for $\mu \sim P_\mu$, with probability 1 it holds that $\|\mu\|^2 \leq \frac{c_1^2}{t_1 n}$, and the second condition in (12) holds as long as $c_1^2 \leq n$. Using this we get for any $c_2 > 0$
$$P_{(P,Q)\sim\pi_1}\left\{\mathrm{MMD}_k(P,Q) \ \geq\ c_2\sqrt{\frac{\beta t_0}{t_1 e n}}\right\} \ \geq\ P_{\mu\sim P_\mu}\left\{\|\mu\|^2 \ \geq\ \frac{c_2^2}{t_1 n}\cdot\frac{2+d}{d}\right\}. \tag{13}$$
Note that for $\mu \sim P_\mu$, $\|\mu\|^2 = \sum_{i=1}^d \mu_i^2$ is a sum of d i.i.d. bounded random variables. Also, simple computations show that
$$\mathbb E\|\mu\|^2 = \sum_{i=1}^d \mathbb E\mu_i^2 = d\cdot\frac{c_1^2}{3dnt_1} = \frac{c_1^2}{3nt_1} \qquad\text{and}\qquad \mathbb V\|\mu\|^2 = \sum_{i=1}^d \mathbb V\mu_i^2 = \frac{4c_1^4}{45\,d\,n^2 t_1^2}.$$
Using the Chebyshev-Cantelli inequality of Theorem B.2 (Appendix B) we get for any $\rho > 0$
$$P_{\mu\sim P_\mu}\left\{\|\mu\|^2 \ \leq\ \mathbb E\|\mu\|^2 - \rho\sqrt{\mathbb V\|\mu\|^2}\right\} \ \leq\ \frac{1}{1+\rho^2},$$
or equivalently, for any $\rho > 0$,
$$P_{\mu\sim P_\mu}\left\{\|\mu\|^2 \ >\ \frac{c_1^2}{nt_1}\left(\frac13 - \frac{2\rho}{3\sqrt{5d}}\right)\right\} \ \geq\ 1 - \frac{1}{1+\rho^2}.$$
Choosing $\rho \leq \frac{\sqrt{5d}}{2}\bigl(1 - 9(c_2/c_1)^2\bigr)$, we can further lower bound (13):
$$P_{(P,Q)\sim\pi_1}\left\{\mathrm{MMD}_k(P,Q) \ \geq\ c_2\sqrt{\frac{\beta t_0}{t_1 e n}}\right\} \ \geq\ 1 - \frac{1}{1+\rho^2}.$$
We finally set $\tau = 0.4$, $c_1 = 0.8$, $c_2 = 0.1$, $\rho = 2\sqrt5$, and check that inequality (11) and the second condition of (12) are satisfied, while
$$\frac{\tau\,(1 - \delta - \beta_1)}{1+\tau} - \beta_0 \ \geq\ \frac{1}{14}, \qquad \text{with}\quad \beta_1 = \frac{1}{1+\rho^2},\ \beta_0 = 0.$$
We complete the proof by application of Theorem 4.
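A quick numeric sanity check of the final constants, under the reconstruction above (our own sketch, not from the paper); it uses SciPy's hyperbolic sine integral Shi via scipy.special.shichi.

import numpy as np
from scipy.special import shichi

tau, c1, c2 = 0.4, 0.8, 0.1
rho = 2 * np.sqrt(5)

shi, _ = shichi(2 * c1**2)           # Shi(2 c1^2)
delta = tau / (4 * c1**2) * shi      # Condition 2 of Theorem 4
beta1 = 1 / (1 + rho**2)             # failure probability from Chebyshev-Cantelli
bound = tau * (1 - delta - beta1) / (1 + tau)

print(f"delta = {delta:.3f}, beta1 = {beta1:.3f}, bound = {bound:.3f} "
      f"(target 1/14 = {1/14:.3f})")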
4 Discussion
In this paper, we provided the first known lower bounds for the estimation of maximum mean
discrepancy (MMD) based on finite random samples. Based on this result, we established the minimax
rate optimality of the empirical estimator. Interestingly, we showed that for radial kernels on Rd , the
optimal speed of convergence depends only on the properties of the kernel and is independent of d.
However, the paper does not address the important question of minimax rates for MMD-based tests. We believe that the minimax rates of testing with MMD match those of MMD estimation, and we intend to build on this work in the future to establish minimax testing results involving MMD.
Since MMD is an integral probability metric (IPM) [11], a related problem of interest is the minimax
estimation of IPMs. An IPM is a class of distances on probability measures, defined as $\gamma_{\mathcal F}(P,Q) := \sup\bigl\{\int f(x)\,d(P-Q)(x) : f \in \mathcal F\bigr\}$, where $\mathcal F$ is a class of bounded measurable functions on a topological space X, with P and Q being Borel probability measures. It is well known [16] that the choice $\mathcal F = \{f \in \mathcal H : \|f\|_{\mathcal H} \leq 1\}$ yields $\mathrm{MMD}_k(P,Q)$, where $\mathcal H$ is a reproducing kernel Hilbert space with a bounded reproducing kernel k. [16] studied the empirical estimation of $\gamma_{\mathcal F}(P,Q)$ for various choices of $\mathcal F$ and established the consistency and convergence rates for the
empirical estimator. However, it remains an open question as to whether these rates are minimax
optimal.
References
[1] A. Berlinet and C. Thomas-Agnan. Reproducing Kernel Hilbert Spaces in Probability and Statistics. Kluwer Academic Publishers, London, UK, 2004.
[2] S. Boucheron, G. Lugosi, and P. Massart. Concentration Inequalities: A Nonasymptotic Theory of Independence. Oxford University Press, 2013.
[3] K. Fukumizu, A. Gretton, X. Sun, and B. Schölkopf. Kernel measures of conditional dependence. In J. C. Platt, D. Koller, Y. Singer, and S. Roweis, editors, Advances in Neural Information Processing Systems 20, pages 489-496, Cambridge, MA, 2008. MIT Press.
[4] K. Fukumizu, L. Song, and A. Gretton. Kernel Bayes' rule: Bayesian inference with positive definite kernels. J. Mach. Learn. Res., 14:3753-3783, 2013.
[5] A. Gretton, K. M. Borgwardt, M. Rasch, B. Schölkopf, and A. Smola. A kernel method for the two sample problem. In B. Schölkopf, J. Platt, and T. Hoffman, editors, Advances in Neural Information Processing Systems 19, pages 513-520, Cambridge, MA, 2007. MIT Press.
[6] A. Gretton, K. M. Borgwardt, M. J. Rasch, B. Schölkopf, and A. J. Smola. A kernel two-sample test. Journal of Machine Learning Research, 13:723-773, 2012.
[7] A. Gretton, K. Fukumizu, C. H. Teo, L. Song, B. Schölkopf, and A. J. Smola. A kernel statistical test of independence. In J. Platt, D. Koller, Y. Singer, and S. Roweis, editors, Advances in Neural Information Processing Systems 20, pages 585-592. MIT Press, 2008.
[8] E. L. Lehmann and G. Casella. Theory of Point Estimation. Springer-Verlag, New York, 2008.
[9] D. Lopez-Paz, K. Muandet, B. Schölkopf, and I. Tolstikhin. Towards a learning theory of cause-effect inference. In Proceedings of the 32nd International Conference on Machine Learning, ICML 2015, Lille, France, 6-11 July 2015, 2015.
[10] K. Muandet, B. Sriperumbudur, K. Fukumizu, A. Gretton, and B. Schölkopf. Kernel mean shrinkage estimators. Journal of Machine Learning Research, 2016. To appear.
[11] A. Müller. Integral probability metrics and their generating classes of functions. Advances in Applied Probability, 29:429-443, 1997.
[12] I. J. Schoenberg. Metric spaces and completely monotone functions. The Annals of Mathematics, 39(4):811-841, 1938.
[13] A. J. Smola, A. Gretton, L. Song, and B. Schölkopf. A Hilbert space embedding for distributions. In Proceedings of the 18th International Conference on Algorithmic Learning Theory (ALT), pages 13-31. Springer-Verlag, 2007.
[14] L. Song, A. Smola, A. Gretton, J. Bedo, and K. Borgwardt. Feature selection via dependence maximization. Journal of Machine Learning Research, 13:1393-1434, 2012.
[15] L. Song, X. Zhang, A. Smola, A. Gretton, and B. Schölkopf. Tailoring density estimation via reproducing kernel moment matching. In Proceedings of the 25th International Conference on Machine Learning, ICML 2008, pages 992-999, 2008.
[16] B. K. Sriperumbudur, K. Fukumizu, A. Gretton, B. Schölkopf, and G. R. G. Lanckriet. On the empirical estimation of integral probability metrics. Electronic Journal of Statistics, 6:1550-1599, 2012.
[17] B. K. Sriperumbudur, K. Fukumizu, and G. R. G. Lanckriet. Universality, characteristic kernels and RKHS embedding of measures. J. Mach. Learn. Res., 12:2389-2410, 2011.
[18] B. K. Sriperumbudur, A. Gretton, K. Fukumizu, B. Schölkopf, and G. R. G. Lanckriet. Hilbert space embeddings and metrics on probability measures. J. Mach. Learn. Res., 11:1517-1561, 2010.
[19] I. Steinwart and A. Christmann. Support Vector Machines. Springer, 2008.
[20] Z. Szabó, A. Gretton, B. Póczos, and B. K. Sriperumbudur. Two-stage sampled learning theory on distributions. In Proceedings of the Eighteenth International Conference on Artificial Intelligence and Statistics, volume 38, pages 948-957. JMLR Workshop and Conference Proceedings, 2015.
[21] I. Tolstikhin, B. Sriperumbudur, and K. Muandet. Minimax estimation of kernel mean embeddings. arXiv:1602.04361 [math.ST], 2016.
[22] A. B. Tsybakov. Introduction to Nonparametric Estimation. Springer, NY, 2008.
6,062 | 6,484 | Fast recovery from a union of subspaces
Chinmay Hegde
Iowa State University
Piotr Indyk
MIT
Ludwig Schmidt
MIT
Abstract
We address the problem of recovering a high-dimensional but structured vector
from linear observations in a general setting where the vector can come from an
arbitrary union of subspaces. This setup includes well-studied problems such as
compressive sensing and low-rank matrix recovery. We show how to design more
efficient algorithms for the union-of-subspace recovery problem by using approximate projections. Instantiating our general framework for the low-rank matrix
recovery problem gives the fastest provable running time for an algorithm with
optimal sample complexity. Moreover, we give fast approximate projections for 2D
histograms, another well-studied low-dimensional model of data. We complement
our theoretical results with experiments demonstrating that our framework also
leads to improved time and sample complexity empirically.
1
Introduction
Over the past decade, exploiting low-dimensional structure in high-dimensional problems has become
a highly active area of research in machine learning, signal processing, and statistics. In a nutshell,
the general approach is to utilize a low-dimensional model of relevant data in order to achieve
better prediction, compression, or estimation compared to a ?black box? treatment of the ambient
high-dimensional space. For instance, the seminal work on compressive sensing and sparse linear
regression has shown how to estimate a sparse, high-dimensional vector from a small number of
linear observations that essentially depends only on the small sparsity of the vector, as opposed to its
large ambient dimension. Further examples of low-dimensional models are low-rank matrices, groupstructured sparsity, and general union-of-subspaces models, all of which have found applications in
problems such as matrix completion, principal component analysis, compression, and clustering.
These low-dimensional models have a common reason for their success: they capture important
structure present in real world data with a formal concept that is suitable for a rigorous mathematical
analysis. This combination has led to statistical performance improvements in several applications
where the ambient high-dimensional space is too large for accurate estimation from a limited number
of samples. However, exploiting the low-dimensional structure also comes at a cost: incorporating
the structural constraints into the statistical estimation procedure often results in a more challenging
algorithmic problems. Given the growing size of modern data sets, even problems that are solvable
in polynomial time can quickly become infeasible. This leads to the following important question:
Can we design efficient algorithms that combine (near)-optimal statistical efficiency with good
computational complexity?
In this paper, we make progress on this question in the context of recovering a low-dimensional
vector from noisy linear observations, which is the fundamental problem underlying both low-rank
matrix recovery and compressive sensing / sparse linear regression. While there is a wide range of
algorithms for these problems, two approaches for incorporating structure tend to be most common:
(i) convex relaxations of the low-dimensional constraint such as the $\ell_1$- or the nuclear norm [19], and
(ii) iterative methods based on projected gradient descent, e.g., the IHT (Iterative Hard Thresholding)
or SVP (Singular Value Projection) algorithms [5, 15]. Since the convex relaxations are often also
solved with first order methods (e.g., FISTA or SVT [6]), the low-dimensional constraint enters both
approaches through a structure-specific projection or proximal operator. However, this projection
/ proximal operator is often computationally expensive and dominates the overall time complexity
(e.g., it requires a singular value decomposition for the low-rank matrix recovery problem).
In this work, we show how to reduce the computational bottleneck of the projection step by using
approximate projections. Instead of solving the structure-specific projection exactly, our framework
allows us to employ techniques from approximation algorithms without increasing the sample
complexity of the recovery algorithm. While approximate projections have been used in prior work,
our framework is the first to yield provable algorithms for general union-of-subspaces models (such
as low-rank matrices) that combine better running time with no loss in sample complexity compared
to their counterparts utilizing exact projections. Overall, we make three contributions:
1. We introduce an algorithmic framework for recovering vectors from linear observations
given an arbitrary union-of-subspaces model. Our framework only requires approximate
projections, which leads to recovery algorithms with significantly better time complexity.
2. We instantiate our framework for the well-studied low-rank matrix recovery problem, which
yields a provable algorithm combining the optimal sample complexity with the best known
time complexity for this problem.
3. We also instantiate our framework for the problem of recovering 2D-histograms (i.e.,
piecewise constant matrices) from linear observations, which leads to a better empirical
sample complexity than the standard approach based on Haar wavelets.
Our algorithmic framework generalizes recent results for structured sparse recovery [12, 13] and
shows that approximate projections can be employed in a wider context. We believe that these
notions of approximate projections are useful in further constrained estimation settings and have
already obtained preliminary results for structured sparse PCA. For conciseness, we focus on the
union-of-subspaces recovery problem in this paper.
Outline of the paper. In Section 2, we formally introduce the union-of-subspaces recovery problem
and state our main results. Section 3 then explains our algorithmic framework in more detail
and Section 4 instantiates the framework for low-rank matrix recovery. Section 5 concludes with
experimental results. Due to space constraints, we address our results for 2D histograms mainly in
Appendix C of the supplementary material.
2 Our contributions
We begin by defining our problem of interest. Our goal is to recover an unknown, structured vector
$\theta^* \in \mathbb R^d$ from linear observations of the form
$$y = X\theta^* + e, \tag{1}$$
where the vector $y \in \mathbb R^n$ contains the linear observations / measurements, the matrix $X \in \mathbb R^{n\times d}$ is the design / measurement matrix, and the vector $e \in \mathbb R^n$ is an arbitrary noise vector. The formal goal is to find an estimate $\hat\theta \in \mathbb R^d$ such that $\|\hat\theta - \theta^*\|_2 \leq C\cdot\|e\|_2$, where C is a fixed, universal constant and $\|\cdot\|_2$ is the standard $\ell_2$-norm (for notational simplicity, we omit the subscript on the $\ell_2$-norm in the rest of the paper). The structure we assume is that the vector $\theta^*$ belongs to a subspace model:
Definition 1 (Subspace model). A subspace model $\mathbb U$ is a set of linear subspaces. The set of vectors associated with the subspace model $\mathbb U$ is $M(\mathbb U) = \{\theta \mid \theta \in U \text{ for some } U \in \mathbb U\}$.
A subspace model is a natural framework generalizing many of the low-dimensional data models
mentioned above. For example, the set of sparse vectors with s nonzeros can be represented with
$\binom{d}{s}$ subspaces corresponding to the $\binom{d}{s}$ possible sparse support sets. The resulting problem of recovering $\theta^*$ from observations of the form (1) then is the standard compressive sensing / sparse
linear regression problem. Structured sparsity is a direct extension of this formulation in which we
only include a smaller set of allowed supports, e.g., supports corresponding to group structures.
Our framework also includes the case where the union of subspaces is taken over an infinite set: we
can encode the low-rank matrix recovery problem by letting U be the set of rank-r matrix subspaces,
i.e., each subspace is given by a set of r orthogonal rank-one matrices. By considering the singular
value decomposition, it is easy to see that every rank-r matrix can be written as the linear combination
of r orthogonal rank-one matrices.
Next, we introduce related notation. For a linear subspace U of $\mathbb R^d$, let $P_U \in \mathbb R^{d\times d}$ be the orthogonal projection onto U. We denote the orthogonal complement of the subspace U with $U^\perp$, so that $\theta = P_U\theta + P_{U^\perp}\theta$. We extend the notion of adding subspaces (i.e., $U + V = \{u + v \mid u \in U \text{ and } v \in V\}$) to subspace models: the sum of two subspace models $\mathbb U$ and $\mathbb V$ is $\mathbb U \oplus \mathbb V = \{U + V \mid U \in \mathbb U \text{ and } V \in \mathbb V\}$. We denote the k-wise sum of a subspace model with $\oplus^k\,\mathbb U = \mathbb U \oplus \mathbb U \oplus \dots \oplus \mathbb U$.
Finally, we introduce a variant of the well-known restricted isometry property (RIP) for subspace
models. The RIP is a common regularity assumption for the design matrix X that is often used in
compressive sensing and low-rank matrix recovery in order to decouple the analysis of algorithms
from concrete sampling bounds.1 Formally, we have:
Definition 2 (Subspace RIP). Let $X \in \mathbb R^{n\times d}$, let $\mathbb U$ be a subspace model, and let $\delta \geq 0$. Then X satisfies the $(\mathbb U, \delta)$-subspace RIP if for all $\theta \in M(\mathbb U)$ we have $(1-\delta)\|\theta\|^2 \leq \|X\theta\|^2 \leq (1+\delta)\|\theta\|^2$.
2.1
A framework for recovery algorithms with approximate projections
Considering the problem (1) and the goal of estimating under the $\ell_2$-norm, a natural algorithm is projected gradient descent with the constraint set $M(\mathbb U)$. This corresponds to iterations of the form
$$\hat\theta^{i+1} \ \leftarrow\ P_{\mathbb U}\bigl(\hat\theta^i - \eta\cdot X^T(X\hat\theta^i - y)\bigr), \tag{2}$$
where $\eta \in \mathbb R$ is the step size and we have extended our notation so that $P_{\mathbb U}$ denotes a projection onto the set $M(\mathbb U)$. Hence we require an oracle that projects an arbitrary vector $b \in \mathbb R^d$ into a subspace model $\mathbb U$, which corresponds to finding a subspace $U \in \mathbb U$ so that $\|b - P_U b\|$ is minimized. Recovery
algorithms of the form (2) have been proposed for various instances of the union-of-subspaces
recovery problem and are known as Iterative Hard Thresholding (IHT) [5], model-IHT [1], and
Singular Value Projection (SVP) [15]. Under regularity conditions on the design matrix X such as the
RIP, these algorithms find accurate estimates $\hat\theta$ from an asymptotically optimal number of samples.
However, for structures more complicated than plain sparsity (e.g., group sparsity or a low-rank
constraint), the projection oracle is often the computational bottleneck.
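To make the iteration (2) concrete, here is a compact sketch (our own, not the authors' reference code) of the projected gradient loop with a pluggable projection oracle; the example instantiates it for plain s-sparse recovery, where the exact projection is hard thresholding, and the problem sizes are arbitrary illustrative choices.

import numpy as np

def iht(y, X, project, step=1.0, iters=200):
    """Iteration (2): gradient step on the least-squares loss, then project."""
    theta = np.zeros(X.shape[1])
    for _ in range(iters):
        grad = X.T @ (X @ theta - y)
        theta = project(theta - step * grad)
    return theta

def hard_threshold(b, s):
    """Exact projection onto s-sparse vectors: keep the s largest entries."""
    out = np.zeros_like(b)
    idx = np.argsort(np.abs(b))[-s:]
    out[idx] = b[idx]
    return out

rng = np.random.default_rng(0)
n, d, s = 200, 400, 10
X = rng.normal(size=(n, d)) / np.sqrt(n)     # Gaussian design, RIP-friendly
theta_star = np.zeros(d)
theta_star[rng.choice(d, s, replace=False)] = 1.0
y = X @ theta_star
est = iht(y, X, lambda b: hard_threshold(b, s))
print(np.linalg.norm(est - theta_star))      # should be small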
To overcome this barrier, we propose two complementary notions of approximate subspace projections.
Note that for an exact projection, we have that $\|b\|^2 = \|b - P_U b\|^2 + \|P_U b\|^2$. Hence minimizing the "tail" error $\|b - P_U b\|$ is equivalent to maximizing the "head" quantity $\|P_U b\|$. Instead of minimizing /
maximizing these quantities exactly, the following definitions allow a constant factor approximation:
Definition 3 (Approximate tail projection). Let $\mathbb U$ and $\mathbb U_T$ be subspace models and let $c_T \geq 1$. Then $T : \mathbb R^d \to \mathbb U_T$ is a $(c_T, \mathbb U, \mathbb U_T)$-approximate tail projection if the following guarantee holds for all $b \in \mathbb R^d$: the returned subspace $U = T(b)$ satisfies $\|b - P_U b\| \leq c_T\,\min_{U'\in\mathbb U}\|b - P_{U'} b\|$.
Definition 4 (Approximate head projection). Let $\mathbb U$ and $\mathbb U_H$ be subspace models and let $c_H > 0$. Then $H : \mathbb R^d \to \mathbb U_H$ is a $(c_H, \mathbb U, \mathbb U_H)$-approximate head projection if the following guarantee holds for all $b \in \mathbb R^d$: the returned subspace $U = H(b)$ satisfies $\|P_U b\| \geq c_H\,\max_{U'\in\mathbb U}\|P_{U'} b\|$.
It is important to note that the two definitions are distinct in the sense that a constant-factor head
approximation does not imply a constant-factor tail approximation, or vice versa (to see this, consider
a vector with a very large or very small tail error, respectively). Another feature of these definitions is
that the approximate projections are allowed to choose subspaces from a potentially larger subspace
model, i.e., we can have $\mathbb U \subsetneq \mathbb U_H$ (or $\mathbb U \subsetneq \mathbb U_T$). This is a useful property when designing approximate
head and tail projection algorithms as it allows for bicriterion approximation guarantees.
We now state the main result for our new recovery algorithm. In a nutshell, we show that using both
notions of approximate projections achieves the same statistical efficiency as using exact projections.
As we will see in later sections, the weaker approximate projection guarantees allow us to design
algorithms with a significantly better time complexity than their exact counterparts. To simplify the
following statement, we defer the precise trade-off between the approximation ratios to Section 3.
¹Note that exact recovery from arbitrary linear observations is already an NP-hard problem in the noiseless
case, and hence regularity conditions on the design matrix X are necessary for efficient algorithms. While there
are more general regularity conditions such as the restricted eigenvalue property, we state our results here under
the RIP assumption in order to simplify the presentation of our algorithmic framework.
Theorem 5 (informal). Let H and T be approximate head and tail projections with constant approximation ratios, and let the matrix X satisfy the $(\oplus^c\,\mathbb U, \delta)$-subspace RIP for a sufficiently large constant c and a sufficiently small constant $\delta$. Then there is an algorithm AS-IHT that returns an estimate $\hat\theta$ such that $\|\hat\theta - \theta^*\| \leq C\|e\|$. The algorithm requires $O(\log(\|\theta^*\|/\|e\|))$ multiplications with X and $X^T$, and $O(\log(\|\theta^*\|/\|e\|))$ invocations of H and T.
Up to constant factors, the requirements on the RIP of X in Theorem 5 are the same as for exact
projections. As a result, our sample complexity is only affected by a constant factor through the use
of approximate projections, and our experiments in Section 5 show that the empirical loss in sample
complexity is negligible. Similarly, the number of iterations $O(\log(\|\theta^*\|/\|e\|))$ is also only affected by
a constant factor compared to the use of exact projections [5, 15]. Finally, it is worth mentioning that
using two notions of approximate projections is crucial: prior work in the special case of structured
sparsity has already shown that only one type of approximate projection is not sufficient for strong
recovery guarantees [13].
2.2 Low-rank matrix recovery
We now instantiate our new algorithmic framework for the low-rank matrix recovery problem.
Variants of this problem are widely studied in machine learning, signal processing, and statistics, and
are known under different names such as matrix completion, matrix sensing, and matrix regression.
As mentioned above, we can incorporate the low-rank matrix structure into our general union-of-subspaces model by considering the union of all low-rank matrix subspaces. For simplicity, we state
the following bounds for the case of square matrices, but all our results also apply to rectangular
matrices. Formally, we assume that $\theta^* \in \mathbb{R}^d$ is the vectorized form of a rank-r matrix $\Theta^* \in \mathbb{R}^{d_1 \times d_1}$, where $d = d_1^2$ and typically $r \ll d_1$. Seminal results have shown that it is possible to achieve the
subspace-RIP for low-rank matrices with only $n = O(r \cdot d_1)$ linear observations, which can be much
smaller than the total dimensionality of the matrix $d_1^2$. However, the bottleneck in recovery algorithms
is often the singular value decomposition (SVD), which is necessary for both exact projections and
soft thresholding operators and has a time complexity of $O(d_1^3)$.
Our new algorithmic framework for approximate projections allows us to leverage recent results
on approximate SVDs. We show that it is possible to compute both head and tail projections for
low-rank matrices in $\widetilde{O}(r \cdot d_1^2)$ time, which is significantly faster than the $O(d_1^3)$ time for an exact SVD in the relevant regime where $r \ll d_1$. Overall, we get the following result.
Theorem 6. Let $X \in \mathbb{R}^{n \times d}$ be a matrix with the subspace-RIP for low-rank matrices, and let $T_X$ denote the time to multiply a d-dimensional vector with X or $X^T$. Then there is an algorithm that recovers an estimate $\hat{\theta}$ such that $\|\hat{\theta} - \theta^*\| \le C \|e\|$. Moreover, the algorithm runs in time $\widetilde{O}(T_X + r \cdot d_1^2)$.
In the regime where multiplication with the matrix X is fast, the time complexity of the projection dominates the time complexity of the recovery algorithms. For instance, structured observations such as a subsampled Fourier matrix achieve $T_X = \widetilde{O}(d_1^2)$; see Appendix D for details. Here, our algorithm runs in time $\widetilde{O}(r \cdot d_1^2)$, which is the first provable running time faster than the $O(d_1^3)$ bottleneck given by a single exact SVD. While prior work has suggested the use of approximate SVDs in low-rank matrix recovery [9], our results are the first that give a provably better time complexity for this combination of projected gradient descent and approximate SVDs. Hence Theorem 6 can be seen as a theoretical justification for the heuristic use of approximate SVDs.
Finally, we remark that Theorem 6 does not directly cover the low-rank matrix completion case
because the subsampling operator does not satisfy the low-rank RIP [9]. To clarify our use of
approximate SVDs, we focus on the RIP setting in our proofs, similar to recent work on low-rank
matrix recovery [7, 22]. We believe that results similar to those for SVP [15] also hold for our algorithm,
and our experiments in Section 5 show that our algorithm works well for low-rank matrix completion.
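As an aside, structured measurement operators of this kind are simple to emulate. The sketch below is our own construction (the uniform sampling pattern and the normalization are assumptions of the sketch): it builds a subsampled 2D Fourier operator and its adjoint in NumPy, where a single application costs one FFT, i.e., $O(d_1^2 \log d_1)$ time.

```python
import numpy as np

def make_subsampled_fourier_ops(d1, n, rng):
    """Subsampled 2D Fourier measurements of a d1 x d1 matrix: n rows of a
    unitary 2D DFT, sampled uniformly at random. Measurements are complex;
    real implementations typically stack real and imaginary parts."""
    idx = rng.choice(d1 * d1, size=n, replace=False)
    scale = 1.0 / d1  # the full 2D DFT is unitary after this scaling

    def X(theta):  # theta: d1 x d1 matrix -> n measurements
        return (np.fft.fft2(theta) * scale).ravel()[idx]

    def Xt(y):     # adjoint map: n measurements -> d1 x d1 matrix
        F = np.zeros(d1 * d1, dtype=complex)
        F[idx] = y
        return np.fft.ifft2(F.reshape(d1, d1)) * d1

    return X, Xt
```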
2.3 2D-histogram recovery
Next, we instantiate our new framework for 2D-histograms, another natural low-dimensional model.
As before, we think of the vector $\theta^* \in \mathbb{R}^d$ as a matrix $\Theta \in \mathbb{R}^{d_1 \times d_1}$ and assume the square case for simplicity (again, our results also apply to rectangular matrices). We say that $\Theta$ is a k-histogram if the coefficients of $\Theta$ can be described as k axis-aligned rectangles on which $\Theta$ is constant. This definition
is a generalization of 1D-histograms to the two-dimensional setting and has found applications in
several areas such as databases and density estimation. Moreover, the theoretical computer science
community has studied sketching and streaming algorithms for histograms, which is essentially the
problem of recovering a histogram from linear observations. While the wavelet tree model with Haar wavelets gives the correct sample complexity of $n = O(k \log d)$ for 1D-histograms, the wavelet tree
approach incurs a suboptimal sample complexity of $O(k \log^2 d)$ for 2D-histograms. It is possible
to achieve the optimal sample complexity O(k log d) also for 2D-histograms, but the corresponding
exact projection requires a complicated dynamic program (DP) with time complexity $O(d_1^5 k^2)$, which
is impractical for all but very small problem dimensions [18].
We design significantly faster approximate projection algorithms for 2D histograms. Our approach is
based on an approximate DP [18] that we combine with a Lagrangian relaxation of the k-rectangle
constraint. Both algorithms have parameters for controlling the trade-off between the size of the
output histogram, the approximation ratio, and the running time. As mentioned above, the bicriterion
nature of our approximate head and tail guarantees becomes useful here. In the following two
theorems, we let Uk be the subspace model of 2D histograms consisting of k-rectangles.
Theorem 7. Let $\zeta > 0$ and $\varepsilon > 0$ be arbitrary. Then there is a $(1 + \varepsilon, \mathbb{U}_k, \mathbb{U}_{c \cdot k})$-approximate tail projection for 2D histograms where $c = O(1/\zeta^2 \varepsilon)$. Moreover, the algorithm runs in time $\widetilde{O}(d^{1+\zeta})$.
Theorem 8. Let $\zeta > 0$ and $\varepsilon > 0$ be arbitrary. Then there is a $(1 - \varepsilon, \mathbb{U}_k, \mathbb{U}_{c \cdot k})$-approximate head projection for 2D histograms where $c = O(1/\zeta^2 \varepsilon)$. Moreover, the algorithm runs in time $\widetilde{O}(d^{1+\zeta})$.
Note that both algorithms offer a running time that is almost linear, and the small polynomial gap to
a linear running time can be controlled as a trade-off between computational and statistical efficiency
(a larger output histogram requires more samples to recover). While we provide rigorous proofs for
the approximation algorithms as stated above, we remark that we do not establish an overall recovery
result similar to Theorem 6. The reason is that the approximate head projection is competitive
with respect to k-histograms, but not with the space $\mathbb{U}_k \oplus \mathbb{U}_k$, i.e., the sum of two k-histogram
subspaces. The details are somewhat technical and we give a more detailed discussion in Appendix
C.3. However, under a natural structural conjecture about sums of k-histogram subspaces, we obtain
a similar result as Theorem 6. Moreover, we experimentally demonstrate that the sample complexity
of our algorithms already improves over wavelets for k-histograms of size 32 ? 32.
Finally, we note that our DP approach also generalizes to $\ell$-dimensional histograms for any constant $\ell \ge 2$. As the dimension of the histogram structure increases, the gap in sample complexity between our algorithm and the prior wavelet-based approach becomes increasingly wide and scales as $O(k \log d)$ vs. $O(k \log^{\ell} d)$. For simplicity, we limit our attention to the 2D case described above.
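For intuition about what a histogram projection computes, the sketch below implements the classic exact dynamic program for the 1D case: the best approximation of a vector by at most k constant pieces in squared error, in $O(k d^2)$ time. It is only meant to illustrate the projection problem; the paper's contribution is the far faster approximate 2D analogue, which we do not reproduce here.

```python
import numpy as np

def histogram_projection_1d(b, k):
    """Exact tail projection onto 1D k-histograms via dynamic programming:
    dp[p, j] is the best squared error covering b[0:j] with p constant pieces.
    Assumes 1 <= k <= len(b)."""
    d = len(b)
    pre = np.concatenate([[0.0], np.cumsum(b)])
    pre2 = np.concatenate([[0.0], np.cumsum(b * b)])

    def seg_cost(i, j):  # squared error of approximating b[i:j] by its mean
        s, s2, n = pre[j] - pre[i], pre2[j] - pre2[i], j - i
        return s2 - s * s / n

    dp = np.full((k + 1, d + 1), np.inf)
    cut = np.zeros((k + 1, d + 1), dtype=int)
    dp[0, 0] = 0.0
    for p in range(1, k + 1):
        for j in range(1, d + 1):
            for i in range(p - 1, j):
                c = dp[p - 1, i] + seg_cost(i, j)
                if c < dp[p, j]:
                    dp[p, j], cut[p, j] = c, i

    out, j = np.empty(d), d  # backtrack to recover the piecewise-constant vector
    for p in range(k, 0, -1):
        i = cut[p, j]
        out[i:j] = (pre[j] - pre[i]) / (j - i)
        j = i
    return out
```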
2.4 Related work
Recently, there have been several results on approximate projections in the context of recovering low-dimensional structured vectors (see [12, 13] for an overview). While these approaches also work
with approximate projections, they only apply to less general models such as dictionary sparsity [12]
or structured sparsity [13] and do not extend to the low-rank matrix recovery problem we address.
Among recovery frameworks for general union-of-subspaces models, the work closest to ours is [4],
which also gives a generalization of the IHT algorithm. It is important to note that [4] addresses
approximate projections, but requires additive error approximation guarantees instead of the weaker
relative error approximation guarantees required by our framework. Similar to the structured sparsity
case in [13], we are not aware of any algorithms for low-rank or histogram projections that offer
additive error guarantees faster than an exact projection. Overall, our recovery framework can be
seen as a generalization of the approaches in [13] and [4].
Low-rank recovery has received a tremendous amount of attention over the past few years, so we
refer the reader to the recent survey [9] for an overview. When referring to prior work on low-rank
recovery, it is important to note that the fastest known running time for an exact low-rank SVD (even
for rank 1) of a $d_1 \times d_2$ matrix is $O(d_1 d_2 \min(d_1, d_2))$. Several papers provide rigorous proofs for
low-rank recovery using exact SVDs and then refer to Lanczos methods such as PROPACK [16]
while accounting a time complexity of O(d1 d2 r) for a rank-r SVD. While Lanczos methods can be
faster than exact SVDs in the presence of singular value gaps, it is important to note that all rigorous
results for Lanczos SVDs either have a polynomial dependence on the approximation ratio or singular
value gaps [17, 20]. No prior work on low-rank recovery establishes such singular value gaps for
the inputs to the SVD subroutines (and such gaps would be necessary for all iterates in the recovery
algorithm). In contrast, we utilize recent work on gap-independent approximate SVDs [17], which
enables us to give rigorous guarantees for the entire recovery algorithm. Our results can be seen as
justification for the heuristic use of Lanczos methods in prior work.
The paper [2] contains an analysis of an approximate SVD in combination with an iterative recovery
algorithm. However, [2] only uses an approximate tail projection, and as a result the approximation
ratio cT must be very close to 1 in order to achieve a good sample complexity. Overall, this leads to a
time complexity that does not provide an asymptotic improvement over using exact SVDs.
Recently, several papers have analyzed a non-convex approach to low-rank matrix recovery via
factorized gradient descent [3, 7, 22–24]. While these algorithms avoid SVDs in the iterations of
the gradient method, the overall recovery proofs still require an exact SVD in the initialization step.
In order to match the sample complexity of our algorithm or SVP, the factorized gradient methods
require multiple SVDs for this initialization [7, 22]. As a result, our algorithm offers a better provable
time complexity. We remark that [7, 22] use SVP for their initialization, so combining our faster
version of SVP with factorized gradient descent might give the best overall performance.
As mentioned earlier, 1D and 2D histograms have been studied extensively in several areas such
as databases [8, 14] and density estimation. They are typically used to summarize "count vectors", with each coordinate of the vector $\theta$ corresponding to the number of items with a given value in some
data set. Computing linear sketches of such vectors, as well as efficient methods for recovering
histogram approximations from those sketches, became key tools for designing space efficient
dynamic streaming algorithms [10, 11, 21]. For 1D histograms it is known how to achieve the
optimal sketch length bound of n = O(k log d): it can be obtained by representing k-histograms
using a tree of O(k log d) wavelet coefficients as in [10] and then using the structured sparse recovery
algorithm of [1]. However, applying this approach to 2D histograms leads to a sub-optimal bound of
$O(k \log^2 d)$.
3 An algorithm for recovery with approximate projections
We now introduce our algorithm for recovery from general subspace models using only approximate
projections. The pseudo code is formally stated in Algorithm 1 and can be seen as a generalization
of IHT [5]. Similar to IHT, we give a version without step size parameter here in order to simplify
the presentation (it is easy to introduce a step size parameter in order to fine-tune constant factors).
To clarify the connection with projected gradient descent as stated in Equation (2), we use H(b) (or
T(b)) as a function from $\mathbb{R}^d$ to $\mathbb{R}^d$ here. This function is then understood to be $b \mapsto P_{H(b)} b$, i.e., the
orthogonal projection of b onto the subspace identified by H(b).
Algorithm 1 Approximate Subspace-IHT
1: function AS-IHT(y, X, t)
2:   $\hat{\theta}^0 \leftarrow 0$
3:   for $i \leftarrow 0, \ldots, t$ do
4:     $b^i \leftarrow X^T (y - X \hat{\theta}^i)$
5:     $\hat{\theta}^{i+1} \leftarrow T(\hat{\theta}^i + H(b^i))$
6:   return $\hat{\theta} \leftarrow \hat{\theta}^{t+1}$
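A direct Python transcription of this pseudocode might look as follows; `X`, `Xt`, `head`, and `tail` are caller-supplied oracles, and, as a simplifying assumption of the sketch, the head and tail maps return projected vectors rather than subspaces.

```python
import numpy as np

def as_iht(y, X, Xt, head, tail, t):
    """Approximate Subspace-IHT (Algorithm 1). X/Xt apply the measurement
    matrix and its transpose; head/tail are approximate projection oracles."""
    theta = np.zeros_like(Xt(y))           # theta^0 = 0
    for _ in range(t):
        b = Xt(y - X(theta))               # b^i = X^T (y - X theta^i)
        theta = tail(theta + head(b))      # theta^{i+1} = T(theta^i + H(b^i))
    return theta
```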
The main difference to "standard" projected gradient descent is that we apply a projection to both the
gradient step and the new iterate. Intuitively, the head projection ensures two points: (i) The result of
the head projection on $b^i$ still contains a constant fraction of the residual $\theta^* - \hat{\theta}^i$ (see Lemma 13 in
Appendix A). (ii) The input to the tail approximation is close enough to the constraint set U so that
the tail approximation does not prevent the overall convergence. In a nutshell, the head projection "denoises" the gradient so that we can then safely apply an approximate tail projection (as pointed out in [13], only applying an approximate tail projection fails precisely because of "noisy" updates).
Formally, we obtain the following theorem for each iteration of AS-IHT (see Appendix A.1 for the
corresponding proof):
Theorem 9. Let $\hat{\theta}^i$ be the estimate computed by AS-IHT in iteration i and let $r^{i+1} = \theta^* - \hat{\theta}^{i+1}$ be the corresponding residual. Moreover, let $\mathbb{U}$ be an arbitrary subspace model. We also assume:
• $y = X \theta^* + e$ as in Equation (1) with $\theta^* \in \mathcal{M}(\mathbb{U})$.
• T is a $(c_T, \mathbb{U}, \mathbb{U}_T)$-approximate tail projection.
• H is a $(c_H, \mathbb{U} \oplus \mathbb{U}_T, \mathbb{U}_H)$-approximate head projection.
• The matrix X satisfies the $(\mathbb{U} \oplus \mathbb{U}_T \oplus \mathbb{U}_H, \delta)$-subspace RIP.
Then the residual error of the next iterate, i.e., $r^{i+1} = \theta^* - \hat{\theta}^{i+1}$, satisfies
$$\|r^{i+1}\| \le \alpha \|r^i\| + \beta \|e\| ,$$
where
$$\alpha = (1 + c_T)\Big(\delta + \sqrt{1 - \alpha_0^2}\Big), \qquad \beta = (1 + c_T)\Big(\frac{\beta_0}{\sqrt{1 - \alpha_0^2}} + 1 + \delta\Big),$$
$$\alpha_0 = c_H (1 - \delta) - \delta, \qquad \text{and} \qquad \beta_0 = (1 + c_H)\sqrt{1 + \delta} .$$
The important conclusion of Theorem 9 is that AS-IHT still achieves linear convergence when the approximation ratios $c_T$, $c_H$ are sufficiently close to 1 and the RIP constant $\delta$ is sufficiently small. For instance, our approximation algorithms for both low-rank matrices and 2D histograms offer such approximation guarantees. We can also achieve a sufficiently small value of $\delta$ by using a larger number of linear observations in order to strengthen the RIP guarantee (see Appendix D). Hence the use of approximate
projections only affects the theoretical sample complexity bounds by constant factors. Moreover,
our experiments show that approximate projections achieve essentially the same empirical sample
complexity as exact projections (see Section 5).
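As a quick numerical illustration of this linear convergence (a toy setup of ours, not an experiment from the paper), one can run AS-IHT with exact top-k projections on Gaussian measurements and watch the residual contract:

```python
import numpy as np

rng = np.random.default_rng(1)
d, n, k = 400, 200, 10
theta_star = np.zeros(d)
theta_star[rng.choice(d, size=k, replace=False)] = rng.standard_normal(k)

A = rng.standard_normal((n, d)) / np.sqrt(n)    # satisfies the RIP w.h.p.
y = A @ theta_star                               # noiseless observations, e = 0

def hard_threshold(b, k):                        # exact head and tail projection
    out = np.zeros_like(b)
    idx = np.argsort(np.abs(b))[-k:]
    out[idx] = b[idx]
    return out

theta = np.zeros(d)
for i in range(10):
    theta = hard_threshold(theta + hard_threshold(A.T @ (y - A @ theta), k), k)
    # the residual should shrink geometrically when the RIP holds
    # (increase n if the iteration stalls)
    print(i, np.linalg.norm(theta - theta_star))
```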
Given sufficiently small / large constants $c_T$, $c_H$, and $\delta$, it is easy to see that the linear convergence implied by Theorem 9 directly gives the recovery guarantee and bound on the number of iterations stated in Theorem 5 (see Appendix A.1). However, in some cases it might not be possible to design approximation algorithms with constants $c_T$ and $c_H$ sufficiently close to 1 (in contrast, increasing the sample complexity by a constant factor in order to improve $\delta$ is usually a direct consequence of the RIP guarantee or similar statistical regularity assumptions). In order to address this issue, we show how to "boost" an approximate head projection so that the new approximation ratio is arbitrarily close to 1. While this also increases the size of the resulting subspace model, this increase usually affects the sample complexity only by constant factors as before. Note that for any fixed $c_T$, setting $c_H$ sufficiently close to 1 and $\delta$ sufficiently small leads to a convergence rate $\alpha < 1$ (cf. Theorem 9). Hence head boosting enables a linear convergence result for any initial combination of $c_T$ and $c_H$ while only increasing the sample complexity by a constant factor (see Appendix A.3). Formally, we
have the following theorem for head boosting, the proof of which we defer to Appendix A.2.
Theorem 10. Let H be a $(c_H, \mathbb{U}, \mathbb{U}_H)$-approximate head projection running in time $O(T)$, and let $\varepsilon > 0$. Then there is a constant $c = c_{\varepsilon, c_H}$ that depends only on $\varepsilon$ and $c_H$ such that we can construct a $(1 - \varepsilon, \mathbb{U}, \oplus^c\, \mathbb{U}_H)$-approximate head projection running in time $O(c(T + T_1' + T_2'))$, where $T_1'$ is the time needed to apply a projection onto a subspace in $\oplus^c\, \mathbb{U}_H$, and $T_2'$ is the time needed to find an orthogonal projector for the sum of two subspaces in $\oplus^c\, \mathbb{U}_H$.
We note that the idea of head boosting has already appeared in the context of structured sparse
recovery [13]. However, the proof of Theorem 10 is more involved because the subspace in a general
subspace model can have arbitrary angles (for structured sparsity, the subspaces are either parallel or
orthogonal in each coordinate).
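For coordinate subspaces (i.e., structured sparsity), the boosting construction has a particularly simple form: run the head projection on the current residual and take the union of the returned supports, so that each round captures a constant fraction of the remaining energy. The sketch below illustrates this idea under that simplifying assumption; the general subspace case handled by Theorem 10 must deal with arbitrary angles and is more involved.

```python
import numpy as np

def boosted_head(head_support, b, k, rounds):
    """Head boosting for coordinate subspaces: `head_support(r, k)` is an
    approximate head projection returning a support set. The union of supports
    over several rounds drives the head approximation ratio toward 1."""
    S = np.array([], dtype=int)
    for _ in range(rounds):
        r = b.copy()
        r[S] = 0.0               # residual of b after projecting onto span(S)
        S = np.union1d(S, head_support(r, k)).astype(int)
    return S
```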
4 Low-rank matrix recovery
We now instantiate our framework for recovery from a subspace model to the low-rank matrix
recovery problem. Since we already have proposed the top-level recovery algorithm in the previous
section, we only have to provide the problem-specific head and tail approximation algorithms here.
We use the following result from prior work on approximate SVDs.
Fact 11 ([17]). There is an algorithm APPROXSVD with the following guarantee. Let $A \in \mathbb{R}^{d_1 \times d_2}$ be an arbitrary matrix, let $r \in \mathbb{N}$ be the target rank, and let $\varepsilon > 0$ be the desired accuracy. Then with probability $1 - \gamma$, APPROXSVD$(A, r, \varepsilon)$ returns an orthonormal set of vectors $z_1, \ldots, z_r \in \mathbb{R}^{d_1}$ such that for all $i \in [r]$, we have
$$\big| z_i^T A A^T z_i - \sigma_i^2 \big| \le \varepsilon \, \sigma_{r+1}^2 , \qquad (3)$$
[Figure 1 shows two plots. Left ("Matrix recovery"): probability of recovery vs. oversampling ratio $n / r(d_1 + d_2)$ for Exact SVD, PROPACK, Krylov (1 iter), and Krylov (8 iters). Right ("Matrix completion"): running time in seconds vs. oversampling ratio $n / r d_1$ for PROPACK, LinearTimeSVD, and Krylov (2 iters).]
Figure 1: Left: Results for a low-rank matrix recovery experiment using subsampled Fourier measurements. SVP / IHT with one iteration of a block Krylov SVD achieves the same phase transition as SVP with an exact SVD. Right: Results for a low-rank matrix completion problem. SVP / IHT with a block Krylov SVD achieves the best running time and is about 4 to 8 times faster than PROPACK.
where $\sigma_i$ is the i-th largest singular value of A. Furthermore, let $Z \in \mathbb{R}^{d_1 \times r}$ be the matrix with columns $z_i$. Then we also have
$$\|A - Z Z^T A\|_F \le (1 + \varepsilon) \|A - A_r\|_F , \qquad (4)$$
where $A_r$ is the best rank-r Frobenius-norm approximation of A. Finally, the algorithm runs in time
$$O\left( \frac{d_1 d_2 r \log(d_2/\gamma)}{\sqrt{\varepsilon}} + \frac{d_1 r^2 \log^2(d_2/\gamma)}{\varepsilon} + \frac{r^3 \log^3(d_2/\gamma)}{\varepsilon^{3/2}} \right).$$
It is important to note that the above results hold for any input matrix and do not require singular value
gaps. The guarantee (4) directly gives a tail approximation guarantee for the subspace corresponding
to the matrix ZZ T A. Moreover, we can convert the guarantee (3) to a head approximation guarantee
(see Theorem 18 in Appendix B for details). Since the approximation $\varepsilon$ only enters the running time
in the approximate SVD, we can directly combine these approximate projections with Theorem 9,
which then yields Theorem 6 (see Appendix B.1 for details).2 Empirically, we show in the next
section that a very small number of iterations in APPROXSVD already suffices for accurate recovery.
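As a minimal stand-in for APPROXSVD, the sketch below uses randomized block power (subspace) iteration to produce the rank-r tail projection $Z Z^T A$. Note that the paper's guarantees rely on the stronger, gap-independent block Krylov method of [17]; this sketch only conveys the interface that AS-IHT needs.

```python
import numpy as np

def approx_lowrank_tail(A, r, iters=8, rng=None):
    """Randomized subspace iteration: returns Z Z^T A, an approximate best
    rank-r approximation of A (a simple stand-in, not the method of [17])."""
    rng = rng or np.random.default_rng()
    Z = rng.standard_normal((A.shape[0], r))
    for _ in range(iters):
        Z, _ = np.linalg.qr(A @ (A.T @ Z))  # power iteration on A A^T
    return Z @ (Z.T @ A)                     # projection onto span(Z)
```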
5 Experiments
We now investigate the empirical performance of our proposed algorithms. We refer the reader to
Appendix E for more details about the experiments and results for 2D histograms.
Considering our theoretical results on approximate projections for low-rank recovery, one important
empirical question is how the use of approximate SVDs such as [17] affects the sample complexity
of low-rank matrix recovery. For this, we perform a standard experiment and use several algorithms
to recover an image of the MIT logo from subsampled Fourier measurements (cf. Appendix D). The MIT logo has also been used in prior work [15, 19]; we use an image with dimensions $200 \times 133$
and rank 6 (see Appendix E). We limit our attention here to variants of SVP because the algorithm
has good empirical performance and has been used as baseline in other works on low-rank recovery.
Figure 1 shows that SVP / IHT combined with a single iteration of a block Krylov SVD [17] achieves
the same phase transition as SVP with exact SVDs. This indicates that the use of approximate
projections for low-rank recovery is not only theoretically sound but can also lead to practical
algorithms. In Appendix E we also show corresponding running time results demonstrating that the
block Krylov SVD also leads to the fastest recovery algorithm.
We also study the performance of approximate SVDs for the matrix completion problem. We generate
a symmetric matrix of size $2048 \times 2048$ with rank r = 50 and observe a varying number of entries
of the matrix. The approximation errors of the various algorithms are again comparable and reported
in Appendix E. Figure 1 shows the resulting running times for several sampling ratios. Again,
SVP combined with a block Krylov SVD [17] achieves the best running time. Depending on the
oversampling ratio, the block Krylov approach (now with two iterations) is 4 to 8 times faster than
SVP with PROPACK.
²We remark that our definitions require head and tail projections to be deterministic, while the approximate SVD is randomized. However, the running time of APPROXSVD depends only logarithmically on the failure probability, and it is straightforward to apply a union bound over all iterations of AS-IHT. Hence we ignore these details here to simplify the presentation.
References
[1] Richard G. Baraniuk, Volkan Cevher, Marco F. Duarte, and Chinmay Hegde. Model-based compressive
sensing. IEEE Transactions on Information Theory, 56(4):1982?2001, 2010.
[2] Stephen Becker, Volkan Cevher, and Anastasios Kyrillidis. Randomized low-memory singular value
projection. In SampTA (Conference on Sampling Theory and Applications), 2013.
[3] Srinadh Bhojanapalli, Anastasios Kyrillidis, and Sujay Sanghavi. Dropping convexity for faster semidefinite optimization. arXiv preprint 1509.03917, 2015.
[4] Thomas Blumensath. Sampling and reconstructing signals from a union of linear subspaces. IEEE
Transactions on Information Theory, 57(7):4660?4671, 2011.
[5] Thomas Blumensath and Mike E. Davies. Iterative hard thresholding for compressive sensing. Applied
and Computational Harmonic Analysis, 27(3):265?274, 2009.
[6] Jian-Feng Cai, Emmanuel J. Cand?s, and Zuowei Shen. A singular value thresholding algorithm for matrix
completion. SIAM Journal on Optimization, 20(4):1956?1982, 2010.
[7] Yudong Chen and Martin J. Wainwright. Fast low-rank estimation by projected gradient descent: General
statistical and algorithmic guarantees. arXiv preprint arXiv:1509.03025, 2015.
[8] Graham Cormode, Minos Garofalakis, Peter J. Haas, and Chris Jermaine. Synopses for massive data:
Samples, histograms, wavelets, sketches. Foundations and Trends in Databases, 4(1?3):1?294, 2012.
[9] Mark Davenport and Justin Romberg. An overview of low-rank matrix recovery from incomplete observations. arXiv preprint 1601.06422, 2016.
[10] Anna C. Gilbert, Sudipto Guha, Piotr Indyk, Yannis Kotidis, S. Muthukrishnan, and Martin J. Strauss. Fast,
small-space algorithms for approximate histogram maintenance. In STOC, 2002.
[11] Anna C. Gilbert, Yannis Kotidis, S Muthukrishnan, and Martin J. Strauss. Surfing wavelets on streams:
One-pass summaries for approximate aggregate queries. In VLDB, volume 1, pages 79?88, 2001.
[12] Raja Giryes and Deanna Needell. Greedy signal space methods for incoherence and beyond. Applied and
Computational Harmonic Analysis, 39(1):1 ? 20, 2015.
[13] Chinmay Hegde, Piotr Indyk, and Ludwig Schmidt. Approximation algorithms for model-based compressive sensing. IEEE Transactions on Information Theory, 61(9):5129?5147, 2015.
[14] Yannis Ioannidis. The history of histograms (abridged). In Proceedings of the 29th international conference
on Very large data bases-Volume 29, pages 19?30. VLDB Endowment, 2003.
[15] Prateek Jain, Raghu Meka, and Inderjit S. Dhillon. Guaranteed rank minimization via singular value
projection. In NIPS, 2010.
[16] Rasmus M. Larsen. Propack. http://sun.stanford.edu/~rmunk/PROPACK/.
[17] Cameron Musco and Christopher Musco. Randomized block Krylov methods for stronger and faster
approximate singular value decomposition. In NIPS, 2015.
[18] S. Muthukrishnan, Viswanath Poosala, and Torsten Suel. On rectangular partitionings in two dimensions:
Algorithms, complexity and applications. In ICDT, pages 236?256, 1999.
[19] Benjamin Recht, Maryam Fazel, and Pablo A. Parrilo. Guaranteed minimum-rank solutions of linear
matrix equations via nuclear norm minimization. SIAM Review, 52(3):471?501, 2010.
[20] Yousef Saad. On the rates of convergence of the Lanczos and the block-Lanczos methods. SIAM Journal
on Numerical Analysis, 17(5):687?706, 1980.
[21] Nitin Thaper, Sudipto Guha, Piotr Indyk, and Nick Koudas. Dynamic multidimensional histograms. In
SIGMOD, 2002.
[22] Stephen Tu, Ross Boczar, Max Simchowitz, Mahdi Soltanolkotabi, and Benjamin Recht. Low-rank
solutions of linear matrix equations via Procrustes Flow. In ICML, 2016.
[23] Tuo Zhao, Zhaoran Wang, and Han Liu. Nonconvex low rank matrix factorization via inexact first order
oracle. https://www.princeton.edu/~zhaoran/papers/LRMF.pdf.
[24] Qinqing Zheng and John Lafferty. A convergent gradient descent algorithm for rank minimization and
semidefinite programming from random linear measurements. In NIPS. 2015.
Factor Graph Complexity
Corinna Cortes
Google Research
New York, NY 10011
Vitaly Kuznetsov
Google Research
New York, NY 10011
corinna@google.com
vitaly@cims.nyu.edu
Mehryar Mohri
Courant Institute and Google
New York, NY 10012
Scott Yang
Courant Institute
New York, NY 10012
mohri@cims.nyu.edu
yangs@cims.nyu.edu
Abstract
We present a general theoretical analysis of structured prediction with a series
of new results. We give new data-dependent margin guarantees for structured
prediction for a very wide family of loss functions and a general family of hypotheses, with an arbitrary factor graph decomposition. These are the tightest margin
bounds known for both standard multi-class and general structured prediction
problems. Our guarantees are expressed in terms of a data-dependent complexity
measure, factor graph complexity, which we show can be estimated from data and
bounded in terms of familiar quantities for several commonly used hypothesis sets
along with a sparsity measure for features and graphs. Our proof techniques include generalizations of Talagrand?s contraction lemma that can be of independent
interest.
We further extend our theory by leveraging the principle of Voted Risk Minimization (VRM) and show that learning is possible even with complex factor graphs. We
present new learning bounds for this advanced setting, which we use to design two
new algorithms, Voted Conditional Random Field (VCRF) and Voted Structured
Boosting (StructBoost). These algorithms can make use of complex features and
factor graphs and yet benefit from favorable learning guarantees. We also report
the results of experiments with VCRF on several datasets to validate our theory.
1 Introduction
Structured prediction covers a broad family of important learning problems. These include key tasks
in natural language processing such as part-of-speech tagging, parsing, machine translation, and
named-entity recognition, important areas in computer vision such as image segmentation and object
recognition, and also crucial areas in speech processing such as pronunciation modeling and speech
recognition.
In all these problems, the output space admits some structure. This may be a sequence of tags as in
part-of-speech tagging, a parse tree as in context-free parsing, an acyclic graph as in dependency
parsing, or labels of image segments as in object detection. Another property common to these tasks
is that, in each case, the natural loss function admits a decomposition along the output substructures.
As an example, the loss function may be the Hamming loss as in part-of-speech tagging, or it may be
the edit-distance, which is widely used in natural language and speech processing.
30th Conference on Neural Information Processing Systems (NIPS 2016), Barcelona, Spain.
The output structure and corresponding loss function make these problems significantly different
from the (unstructured) binary classification problems extensively studied in learning theory. In
recent years, a number of different algorithms have been designed for structured prediction, including
Conditional Random Field (CRF) [Lafferty et al., 2001], StructSVM [Tsochantaridis et al., 2005],
Maximum-Margin Markov Network (M3N) [Taskar et al., 2003], a kernel-regression algorithm
[Cortes et al., 2007], and search-based approaches such as [Daum? III et al., 2009, Doppa et al., 2014,
Lam et al., 2015, Chang et al., 2015, Ross et al., 2011]. More recently, deep learning techniques have
also been developed for tasks including part-of-speech tagging [Jurafsky and Martin, 2009, Vinyals
et al., 2015a], named-entity recognition [Nadeau and Sekine, 2007], machine translation [Zhang et al.,
2008], image segmentation [Lucchi et al., 2013], and image annotation [Vinyals et al., 2015b].
However, in contrast to the plethora of algorithms, there have been relatively few studies devoted
to the theoretical understanding of structured prediction [Bakir et al., 2007]. Existing learning
guarantees hold primarily for simple losses such as the Hamming loss [Taskar et al., 2003, Cortes
et al., 2014, Collins, 2001] and do not cover other natural losses such as the edit-distance. They also
typically only apply to specific factor graph models. The main exception is the work of McAllester
[2007], which provides PAC-Bayesian guarantees for arbitrary losses, though only in the special case
of randomized algorithms using linear (count-based) hypotheses.
This paper presents a general theoretical analysis of structured prediction with a series of new results.
We give new data-dependent margin guarantees for structured prediction for a broad family of loss
functions and a general family of hypotheses, with an arbitrary factor graph decomposition. These
are the tightest margin bounds known for both standard multi-class and general structured prediction
problems. For special cases studied in the past, our learning bounds match or improve upon the
previously best bounds (see Section 3.3). In particular, our bounds improve upon those of Taskar et al.
[2003]. Our guarantees are expressed in terms of a data-dependent complexity measure, factor graph
complexity, which we show can be estimated from data and bounded in terms of familiar quantities
for several commonly used hypothesis sets along with a sparsity measure for features and graphs.
We further extend our theory by leveraging the principle of Voted Risk Minimization (VRM) and
show that learning is possible even with complex factor graphs. We present new learning bounds for
this advanced setting, which we use to design two new algorithms, Voted Conditional Random Field
(VCRF) and Voted Structured Boosting (StructBoost). These algorithms can make use of complex
features and factor graphs and yet benefit from favorable learning guarantees. As a proof of concept
validating our theory, we also report the results of experiments with VCRF on several datasets.
The paper is organized as follows. In Section 2 we introduce the notation and definitions relevant to
our discussion of structured prediction. In Section 3, we derive a series of new learning guarantees
for structured prediction, which are then used to prove the VRM principle in Section 4. Section 5
develops the algorithmic framework which is directly based on our theory. In Section 6, we provide
some preliminary experimental results that serve as a proof of concept for our theory.
2 Preliminaries
Let X denote the input space and Y the output space. In structured prediction, the output space may
be a set of sequences, images, graphs, parse trees, lists, or some other (typically discrete) objects
admitting some possibly overlapping structure. Thus, we assume that the output structure can be
decomposed into l substructures. For example, this may be positions along a sequence, so that the
output space $\mathcal{Y}$ is decomposable along these substructures: $\mathcal{Y} = \mathcal{Y}_1 \times \cdots \times \mathcal{Y}_l$. Here, $\mathcal{Y}_k$ is the set
of possible labels (or classes) that can be assigned to substructure k.
Loss functions. We denote by $L : \mathcal{Y} \times \mathcal{Y} \to \mathbb{R}_+$ a loss function measuring the dissimilarity of two elements of the output space $\mathcal{Y}$. We will assume that the loss function L is definite, that is $L(y, y') = 0$ iff $y = y'$. This assumption holds for all loss functions commonly used in structured prediction. A key aspect of structured prediction is that the loss function can be decomposed along the
substructures $\mathcal{Y}_k$. As an example, L may be the Hamming loss defined by $L(y, y') = \frac{1}{l} \sum_{k=1}^{l} 1_{y_k \neq y'_k}$ for all $y = (y_1, \ldots, y_l)$ and $y' = (y'_1, \ldots, y'_l)$, with $y_k, y'_k \in \mathcal{Y}_k$. In the common case where $\mathcal{Y}$ is
a set of sequences defined over a finite alphabet, L may be the edit-distance, which is widely used
in natural language and speech processing applications, with possibly different costs associated to
insertions, deletions and substitutions. L may also be a loss based on the negative inner product of
the vectors of n-gram counts of two sequences, or its negative logarithm. Such losses have been
[Figure 1 shows two small factor graphs over variable nodes 1, 2, 3 and factor nodes $f_1$, $f_2$.]
Figure 1: Example of factor graphs. (a) Pairwise Markov network decomposition: $h(x, y) = h_{f_1}(x, y_1, y_2) + h_{f_2}(x, y_2, y_3)$. (b) Other decomposition: $h(x, y) = h_{f_1}(x, y_1, y_3) + h_{f_2}(x, y_1, y_2, y_3)$.
used to approximate the BLEU score loss in machine translation. There are other losses defined
in computational biology based on various string-similarity measures. Our theoretical analysis is
general and applies to arbitrary bounded and definite loss functions.
Scoring functions and factor graphs. We will adopt the common approach in structured prediction
where predictions are based on a scoring function mapping $\mathcal{X} \times \mathcal{Y}$ to $\mathbb{R}$. Let $\mathcal{H}$ be a family of
scoring functions. For any $h \in \mathcal{H}$, we denote by $\mathsf{h}$ the predictor defined by h: for any $x \in \mathcal{X}$, $\mathsf{h}(x) = \operatorname{argmax}_{y \in \mathcal{Y}} h(x, y)$.
Furthermore, we will assume, as is standard in structured prediction, that each function $h \in \mathcal{H}$ can
be decomposed as a sum. We will consider the most general case for such decompositions, which
can be made explicit using the notion of factor graphs.1 A factor graph G is a tuple G = (V, F, E),
where V is a set of variable nodes, F a set of factor nodes, and E a set of undirected edges between
a variable node and a factor node. In our context, V can be identified with the set of substructure
indices, that is V = {1, . . . , l}.
For any factor node f, denote by $\mathcal{N}(f) \subseteq V$ the set of variable nodes connected to f via an edge and define $\mathcal{Y}_f$ as the substructure set cross-product $\mathcal{Y}_f = \prod_{k \in \mathcal{N}(f)} \mathcal{Y}_k$. Then, h admits the following decomposition as a sum of functions $h_f$, each taking as argument an element of the input space $x \in \mathcal{X}$ and an element of $\mathcal{Y}_f$, $y_f \in \mathcal{Y}_f$:
$$h(x, y) = \sum_{f \in F} h_f(x, y_f). \qquad (1)$$
Figure 1 illustrates this definition with two different decompositions. More generally, we will consider
the setting in which a factor graph may depend on a particular example (xi , yi ): G(xi , yi ) = Gi =
([li ], Fi , Ei ). A special case of this setting is for example when the size li (or length) of each example
is allowed to vary and where the number of possible labels |Y| is potentially infinite.
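As a toy illustration of the decomposition (1), consider the chain factor graph of Figure 1(a): each position carries a unary factor and each adjacent pair a pairwise factor. The sketch below is our own example (`unary` and `pairwise` are assumed score tables playing the role of the functions $h_f$); it evaluates $h(x, y)$ and computes the predictor $\mathsf{h}(x) = \operatorname{argmax}_y h(x, y)$ with the standard Viterbi dynamic program.

```python
import numpy as np

def chain_score(unary, pairwise, y):
    """h(x, y) for a chain factor graph: unary[k, y_k] and
    pairwise[k, y_k, y_{k+1}] are the factor scores h_f(x, y_f)."""
    s = sum(unary[k, y[k]] for k in range(len(y)))
    return s + sum(pairwise[k, y[k], y[k + 1]] for k in range(len(y) - 1))

def viterbi(unary, pairwise):
    """argmax_y h(x, y) over all label sequences, in O(l c^2) time."""
    l, c = unary.shape
    dp, back = unary[0].copy(), np.zeros((l, c), dtype=int)
    for k in range(1, l):
        cand = dp[:, None] + pairwise[k - 1]  # cand[prev, cur]
        back[k] = cand.argmax(axis=0)
        dp = cand.max(axis=0) + unary[k]
    y = [int(dp.argmax())]
    for k in range(l - 1, 0, -1):
        y.append(int(back[k, y[-1]]))
    return y[::-1]
```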
We present other examples of such hypothesis sets and their decomposition in Section 3, where we
discuss our learning guarantees. Note that such hypothesis sets H with an additive decomposition are
those commonly used in most structured prediction algorithms [Tsochantaridis et al., 2005, Taskar
et al., 2003, Lafferty et al., 2001]. This is largely motivated by the computational requirement for
efficient training and inference. Our results, while very general, further provide a statistical learning
motivation for such decompositions.
Learning scenario. We consider the familiar supervised learning scenario where the training and
test points are drawn i.i.d. according to some distribution D over $\mathcal{X} \times \mathcal{Y}$. We will further adopt the standard definitions of margin, generalization error and empirical error. The margin $\rho_h(x, y)$ of a hypothesis h for a labeled example $(x, y) \in \mathcal{X} \times \mathcal{Y}$ is defined by
$$\rho_h(x, y) = h(x, y) - \max_{y' \neq y} h(x, y'). \qquad (2)$$
Let $S = ((x_1, y_1), \ldots, (x_m, y_m))$ be a training sample of size m drawn from $D^m$. We denote by $R(h)$ the generalization error and by $\widehat{R}_S(h)$ the empirical error of h over S:
$$R(h) = \mathop{\mathbb{E}}_{(x,y) \sim D} \big[ L(\mathsf{h}(x), y) \big] \quad \text{and} \quad \widehat{R}_S(h) = \mathop{\mathbb{E}}_{(x,y) \sim S} \big[ L(\mathsf{h}(x), y) \big], \qquad (3)$$
¹Factor graphs are typically used to indicate the factorization of a probabilistic model. We are not assuming probabilistic models, but they would also be captured by our general framework: h would then be $-\log$ of a probability.
where $\mathsf{h}(x) = \operatorname{argmax}_y h(x, y)$ and where the notation $(x, y) \sim S$ indicates that $(x, y)$ is drawn according to the empirical distribution defined by S. The learning problem consists of using the sample S to select a hypothesis $h \in \mathcal{H}$ with small expected loss $R(h)$.
Observe that the definiteness of the loss function implies, for all $x \in \mathcal{X}$, the following equality:
$$L(\mathsf{h}(x), y) = L(\mathsf{h}(x), y) \, 1_{\rho_h(x,y) \le 0} . \qquad (4)$$
We will later use this identity in the derivation of surrogate loss functions.
3 General learning bounds for structured prediction
In this section, we present new learning guarantees for structured prediction. Our analysis is general
and applies to the broad family of definite and bounded loss functions described in the previous
section. It is also general in the sense that it applies to general hypothesis sets and not just sub-families
of linear functions. For linear hypotheses, we will give a more refined analysis that holds for arbitrary
norm-p regularized hypothesis sets.
The theoretical analysis of structured prediction is more complex than for classification since, by
definition, it depends on the properties of the loss function and the factor graph. These attributes
capture the combinatorial properties of the problem which must be exploited since the total number
of labels is often exponential in the size of that graph. To tackle this problem, we first introduce a
new complexity tool.
3.1 Complexity measure
A key ingredient of our analysis is a new data-dependent notion of complexity that extends the
classical Rademacher complexity. We define the empirical factor graph Rademacher complexity $\widehat{\mathfrak{R}}^G_S(\mathcal{H})$ of a hypothesis set $\mathcal{H}$ for a sample $S = (x_1, \ldots, x_m)$ and factor graph G as follows:
$$\widehat{\mathfrak{R}}^G_S(\mathcal{H}) = \frac{1}{m} \mathop{\mathbb{E}}_{\epsilon}\left[ \sup_{h \in \mathcal{H}} \sum_{i=1}^{m} \sum_{f \in F_i} \sum_{y \in \mathcal{Y}_f} \sqrt{|F_i|} \, \epsilon_{i,f,y} \, h_f(x_i, y) \right],$$
where $\epsilon = (\epsilon_{i,f,y})_{i \in [m], f \in F_i, y \in \mathcal{Y}_f}$ and where the $\epsilon_{i,f,y}$ are independent Rademacher random variables uniformly distributed over $\{\pm 1\}$. The factor graph Rademacher complexity of $\mathcal{H}$ for a factor graph G is defined as the expectation $\mathfrak{R}^G_m(\mathcal{H}) = \mathbb{E}_{S \sim D^m}\big[\widehat{\mathfrak{R}}^G_S(\mathcal{H})\big]$. It can be shown that the empirical
factor graph Rademacher complexity is concentrated around its mean (Lemma 8). The factor graph
Rademacher complexity is a natural extension of the standard Rademacher complexity to vectorvalued hypothesis sets (with one coordinate per factor in our case). For binary classification, the factor
graph and standard Rademacher complexities coincide. Otherwise, the factor graph complexity can be
upper bounded in terms of the standard one. As with the standard Rademacher complexity, the factor
graph Rademacher complexity of a hypothesis set can be estimated from data in many cases. In some
important cases, it also admits explicit upper bounds similar to those for the standard Rademacher
complexity but with an additional dependence on the factor graph quantities. We will prove this for
several families of functions which are commonly used in structured prediction (Theorem 2).
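For a finite hypothesis set the supremum is a maximum, and the expectation over $\epsilon$ can be approximated by Monte Carlo sampling. The sketch below is an illustration with hypothetical inputs: `h_values[h][i]` is assumed to hold the flattened vector of factor scores $(h_f(x_i, y))_{f \in F_i, y \in \mathcal{Y}_f}$ for hypothesis h on example i.

```python
import numpy as np

def factor_graph_rademacher(h_values, F_sizes, n_trials=200, rng=None):
    """Monte Carlo estimate of the empirical factor graph Rademacher
    complexity for a finite hypothesis set; F_sizes[i] = |F_i|."""
    rng = rng or np.random.default_rng()
    m, total = len(F_sizes), 0.0
    for _ in range(n_trials):
        eps = [rng.choice([-1.0, 1.0], size=v.shape) for v in h_values[0]]
        corr = [
            sum(np.sqrt(F_sizes[i]) * np.dot(eps[i], hv[i]) for i in range(m))
            for hv in h_values          # the sup over H is a max here
        ]
        total += max(corr) / m
    return total / n_trials
```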
3.2 Generalization bounds
In this section, we present new margin bounds for structured prediction based on the factor graph
Rademacher complexity of H. Our results hold both for the additive and the multiplicative empirical
margin losses defined below:
$$\widehat{R}^{\mathrm{add}}_{S,\rho}(h) = \mathop{\mathbb{E}}_{(x,y) \sim S}\left[ \Phi^*\Big( \max_{y' \neq y} L(y', y) - \tfrac{1}{\rho}\big[h(x, y) - h(x, y')\big] \Big) \right] \qquad (5)$$
$$\widehat{R}^{\mathrm{mult}}_{S,\rho}(h) = \mathop{\mathbb{E}}_{(x,y) \sim S}\left[ \Phi^*\Big( \max_{y' \neq y} L(y', y)\Big(1 - \tfrac{1}{\rho}\big[h(x, y) - h(x, y')\big]\Big) \Big) \right]. \qquad (6)$$
Here, $\Phi^*(r) = \min(M, \max(0, r))$ for all r, with $M = \max_{y, y'} L(y, y')$. As we show in Section 5, convex upper bounds on $\widehat{R}^{\mathrm{add}}_{S,\rho}(h)$ and $\widehat{R}^{\mathrm{mult}}_{S,\rho}(h)$ directly lead to many existing structured prediction algorithms. The following is our general data-dependent margin bound for structured prediction.
Theorem 1. Fix $\rho > 0$. For any $\delta > 0$, with probability at least $1 - \delta$ over the draw of a sample S of size m, the following holds for all $h \in \mathcal{H}$:
$$R(h) \le R^{\mathrm{add}}_{\rho}(h) \le \widehat{R}^{\mathrm{add}}_{S,\rho}(h) + \frac{4\sqrt{2}}{\rho}\, \mathfrak{R}^G_m(\mathcal{H}) + M \sqrt{\frac{\log\frac{1}{\delta}}{2m}} ,$$
$$R(h) \le R^{\mathrm{mult}}_{\rho}(h) \le \widehat{R}^{\mathrm{mult}}_{S,\rho}(h) + \frac{4\sqrt{2}\, M}{\rho}\, \mathfrak{R}^G_m(\mathcal{H}) + M \sqrt{\frac{\log\frac{1}{\delta}}{2m}} .$$
The full proof of Theorem 1 is given in Appendix A. It is based on a new contraction lemma
(Lemma 5) generalizing Talagrand's lemma that can be of independent interest.² We also present a
more refined contraction lemma (Lemma 6) that can be used to improve the bounds of Theorem 1.
Theorem 1 is the first data-dependent generalization guarantee for structured prediction with general
loss functions, general hypothesis sets, and arbitrary factor graphs for both multiplicative and additive
margins. We also present a version of this result with empirical complexities as Theorem 7 in the
supplementary material. We will compare these guarantees to known special cases below.
The margin bounds above can be extended to hold uniformly over $\rho \in (0, 1]$ at the price of an additional term of the form $\sqrt{(\log \log_2 \frac{2}{\rho})/m}$ in the bound, using known techniques (see for example [Mohri et al., 2012]).
The hypothesis set used by convex structured prediction algorithms such as StructSVM [Tsochantaridis et al., 2005], Max-Margin Markov Networks (M3N) [Taskar et al., 2003] or Conditional Random Field (CRF) [Lafferty et al., 2001] is that of linear functions. More precisely, let $\Phi$ be a feature mapping from $(\mathcal{X} \times \mathcal{Y})$ to $\mathbb{R}^N$ such that $\Phi(x, y) = \sum_{f \in F} \Phi_f(x, y_f)$. For any p, define $\mathcal{H}_p$ as follows:
$$\mathcal{H}_p = \{ x \mapsto \mathbf{w} \cdot \Phi(x, y) : \mathbf{w} \in \mathbb{R}^N, \|\mathbf{w}\|_p \le \Lambda_p \}.$$
Then, $\widehat{\mathfrak{R}}^G_m(\mathcal{H}_p)$ can be efficiently estimated using random sampling and solving LP programs. Moreover, one can obtain explicit upper bounds on $\widehat{\mathfrak{R}}^G_m(\mathcal{H}_p)$. To simplify our presentation, we will consider the case $p = 1, 2$, but our results can be extended to arbitrary $p \ge 1$ and, more generally, to arbitrary group norms.
Theorem 2. For any sample $S = (x_1, \ldots, x_m)$, the following upper bounds hold for the empirical factor graph complexity of $\mathcal{H}_1$ and $\mathcal{H}_2$:
$$\widehat{\mathfrak{R}}^G_S(\mathcal{H}_1) \le \frac{\Lambda_1 r_\infty}{m} \sqrt{s \log(2N)} , \qquad \widehat{\mathfrak{R}}^G_S(\mathcal{H}_2) \le \frac{\Lambda_2 r_2}{m} \sqrt{\sum_{i=1}^{m} \sum_{f \in F_i} \sum_{y \in \mathcal{Y}_f} |F_i|} ,$$
where $r_\infty = \max_{i,f,y} \|\Phi_f(x_i, y)\|_\infty$, $r_2 = \max_{i,f,y} \|\Phi_f(x_i, y)\|_2$, and where s is a sparsity factor defined by $s = \max_{j \in [1,N]} \sum_{i=1}^{m} \sum_{f \in F_i} \sum_{y \in \mathcal{Y}_f} |F_i| \, 1_{\Phi_{f,j}(x_i, y) \neq 0}$.
Plugging in these factor graph complexity upper bounds into Theorem 1 immediately yields explicit
data-dependent structured prediction learning guarantees for linear hypotheses with general loss
functions and arbitrary factor graphs (see Corollary 10). Observe that, in the worst case, the sparsity
factor can be bounded as follows:
$$s \le \sum_{i=1}^{m} \sum_{f \in F_i} \sum_{y \in \mathcal{Y}_f} |F_i| \le \sum_{i=1}^{m} |F_i|^2 d_i \le m \max_i |F_i|^2 d_i ,$$
where $d_i = \max_{f \in F_i} |\mathcal{Y}_f|$. Thus, the factor graph Rademacher complexities of linear hypotheses in $\mathcal{H}_1$ scale as $O\big(\sqrt{\log(N) \max_i |F_i|^2 d_i / m}\big)$. An important observation is that $|F_i|$ and $d_i$ depend on
the observed sample. This shows that the expected size of the factor graph is crucial for learning in
this scenario. This should be contrasted with other existing structured prediction guarantees that we
discuss below, which assume a fixed upper bound on the size of the factor graph. Note that our result
shows that learning is possible even with an infinite set Y. To the best of our knowledge, this is the
first learning guarantee for learning with infinitely many classes.
2
A result similar to Lemma 5 has also been recently proven independently in [Maurer, 2016].
5
Our learning guarantee for $\mathcal{H}_1$ can additionally benefit from the sparsity of the feature mapping and observed data. In particular, in many applications, $\Phi_{f,j}$ is a binary indicator function that is non-zero for a single $(x, y) \in \mathcal{X} \times \mathcal{Y}_f$. For instance, in NLP, $\Phi_{f,j}$ may indicate an occurrence of a certain n-gram in the input $x_i$ and output $y_i$. In this case, $s = \sum_{i=1}^{m} |F_i|^2 \le m \max_i |F_i|^2$ and the complexity term is only in $O\big(\max_i |F_i| \sqrt{\log(N)/m}\big)$, where N may depend linearly on $d_i$.
3.3 Special cases and comparisons
Markov networks. For the pairwise Markov networks with a fixed number of substructures l studied
by Taskar et al. [2003], our equivalent factor graph admits l nodes, |Fi | = l, and the maximum size
of $\mathcal{Y}_f$ is $d_i = k^2$ if each substructure of a pair can be assigned one of k classes. Thus, if we apply
Corollary 10 with Hamming distance as our loss function and divide the bound through by l, to
normalize the loss to interval [0, 1] as in [Taskar et al., 2003], we obtain the following explicit form
of our guarantee for an additive empirical margin loss, for all $h \in \mathcal{H}_2$:
$$R(h) \le \widehat{R}^{\mathrm{add}}_{S,\rho}(h) + \frac{4 k \Lambda_2 r_2}{\rho} \sqrt{\frac{2}{m}} + 3 \sqrt{\frac{\log\frac{1}{\delta}}{2m}} .$$
This bound can be further improved by eliminating the dependency on k using an extension of our contraction Lemma 5 to $\|\cdot\|_{\infty,2}$ (see Lemma 6). The complexity term of Taskar et al. [2003] is bounded by a quantity that varies as $\widetilde{O}\big(\sqrt{\Lambda_2^2 q^2 r_2^2 / m}\big)$, where q is the maximal out-degree of a factor
graph. Our bound has the same dependence on these key quantities, but with no logarithmic term
in our case. Note that, unlike the result of Taskar et al. [2003], our bound also holds for general
loss functions and different p-norm regularizers. Moreover, our result for a multiplicative empirical
margin loss is new, even in this special case.
Multi-class classification. For standard (unstructured) multi-class classification, we have $|F_i| = 1$ and $d_i = c$, where c is the number of classes. In that case, for linear hypotheses with norm-2 regularization, the complexity term of our bound varies as $O\big(\Lambda_2 r_2 \sqrt{c}/(\rho\sqrt{m})\big)$ (Corollary 11). This
improves upon the best known general margin bounds of Kuznetsov et al. [2014], who provide a
guarantee that scales linearly with the number of classes instead. Moreover, in the special case where
an individual wy is learned for each class y 2 [c], we retrieve the recent favorable bounds given by Lei
et al. [2015], albeit with a somewhat simpler formulation. In that case, for any (x, y), all components
of the feature vector (x, y) are zero, except (perhaps) for the N components corresponding to
class y, where N is the dimension of $\mathbf{w}_y$. In view of that, for example for a group-norm $\|\cdot\|_{2,1}$-regularization, the complexity term of our bound varies as $O\big(\Lambda r \sqrt{\log c}/(\rho\sqrt{m})\big)$, which matches the
results of Lei et al. [2015] with a logarithmic dependency on c (ignoring some complex exponents of
log c in their case). Additionally, note that unlike existing multi-class learning guarantees, our results
hold for arbitrary loss functions. See Corollary 12 for further details. Our sparsity-based bounds
can also be used to give bounds with logarithmic dependence on the number of classes when the
features only take values in {0, 1}. Finally, using Lemma 6 instead of Lemma 5, the dependency on
the number of classes can be further improved.
We conclude this section by observing that, since our guarantees are expressed in terms of the average
size of the factor graph over a given sample, this invites us to search for a hypothesis set H and
predictor h 2 H such that the tradeoff between the empirical size of the factor graph and empirical
error is optimal. In the next section, we will make use of the recently developed principle of Voted
Risk Minimization (VRM) [Cortes et al., 2015] to reach this objective.
4 Voted Risk Minimization
In many structured prediction applications such as natural language processing and computer vision,
one may wish to exploit very rich features. However, the use of rich families of hypotheses could lead
to overfitting. In this section, we show that it may be possible to use rich families in conjunction with
simpler families, provided that fewer complex hypotheses are used (or that they are used with less
mixture weight). We achieve this goal by deriving learning guarantees for ensembles of structured
prediction rules that explicitly account for the differing complexities between families. This will
motivate the algorithms that we present in Section 5.
Assume that we are given p families $\mathcal{H}_1, \ldots, \mathcal{H}_p$ of functions mapping from $\mathcal{X} \times \mathcal{Y}$ to $\mathbb{R}$. Define the ensemble family $\mathcal{F} = \operatorname{conv}(\cup_{k=1}^{p} \mathcal{H}_k)$, that is the family of functions f of the form $f = \sum_{t=1}^{T} \alpha_t h_t$, where $\alpha = (\alpha_1, \ldots, \alpha_T)$ is in the simplex and where, for each $t \in [1, T]$, $h_t$ is in $\mathcal{H}_{k_t}$ for some $k_t \in [1, p]$. We further assume that $\mathfrak{R}^G_m(\mathcal{H}_1) \le \mathfrak{R}^G_m(\mathcal{H}_2) \le \cdots \le \mathfrak{R}^G_m(\mathcal{H}_p)$. As an example, the $\mathcal{H}_k$s may be ordered by the size of the corresponding factor graphs.
The main result of this section is a generalization of the VRM theory to the structured prediction setting. The learning guarantees that we present are in terms of upper bounds on $\widehat{R}^{\mathrm{add}}_{S,\rho,\tau}(h)$ and $\widehat{R}^{\mathrm{mult}}_{S,\rho,\tau}(h)$, which are defined as follows for all $\tau \ge 0$:
$$\widehat{R}^{\mathrm{add}}_{S,\rho,\tau}(h) = \mathop{\mathbb{E}}_{(x,y) \sim S}\left[ \Phi^*\Big( \max_{y' \neq y} L(y', y) + \tau - \tfrac{1}{\rho}\big[h(x, y) - h(x, y')\big] \Big) \right] \qquad (7)$$
$$\widehat{R}^{\mathrm{mult}}_{S,\rho,\tau}(h) = \mathop{\mathbb{E}}_{(x,y) \sim S}\left[ \Phi^*\Big( \max_{y' \neq y} L(y', y)\Big(1 + \tau - \tfrac{1}{\rho}\big[h(x, y) - h(x, y')\big]\Big) \Big) \right]. \qquad (8)$$
Here, $\tau$ can be interpreted as a margin term that acts in conjunction with $\rho$. For simplicity, we assume in this section that $|\mathcal{Y}| = c < +\infty$.
Theorem 3. Fix $\rho > 0$. For any $\delta > 0$, with probability at least $1 - \delta$ over the draw of a sample S of size m, each of the following inequalities holds for all $f \in \mathcal{F}$:
$$R(f) - \widehat{R}^{\mathrm{add}}_{S,\rho,1}(f) \le \frac{4\sqrt{2}}{\rho} \sum_{t=1}^{T} \alpha_t\, \mathfrak{R}^G_m(\mathcal{H}_{k_t}) + C(\rho, M, c, m, p) ,$$
$$R(f) - \widehat{R}^{\mathrm{mult}}_{S,\rho,1}(f) \le \frac{4\sqrt{2}\, M}{\rho} \sum_{t=1}^{T} \alpha_t\, \mathfrak{R}^G_m(\mathcal{H}_{k_t}) + C(\rho, M, c, m, p) ,$$
where $C(\rho, M, c, m, p) = \frac{2M}{\rho}\sqrt{\frac{\log p}{m}} + 3M \sqrt{\Big\lceil \frac{4}{\rho^2} \log\Big(\frac{c^2 \rho^2 m}{4 \log p}\Big) \Big\rceil \frac{\log p}{m} + \frac{\log\frac{2}{\delta}}{2m}}$.
The proof of this theorem crucially depends on the theory we developed in Section 3 and is given in
Appendix A. As with Theorem 1, we also present a version of this result with empirical complexities
as Theorem 14 in the supplementary material. The explicit dependence of this bound on the parameter
vector $\alpha$ suggests that learning even with highly complex hypothesis sets could be possible so long
as the complexity term, which is a weighted average of the factor graph complexities, is not too
large. The theorem provides a quantitative way of determining the mixture weights that should be
apportioned to each family. Furthermore, the dependency on the number of distinct feature map
families Hk is very mild and therefore suggests that a large number of families can be used. These
properties will be useful for motivating new algorithms for structured prediction.
5 Algorithms
In this section, we derive several algorithms for structured prediction based on the VRM principle
discussed in Section 4. We first give general convex upper bounds (Section 5.1) on the structured
prediction loss which recover as special cases the loss functions used in StructSVM [Tsochantaridis
et al., 2005], Max-Margin Markov Networks (M3N) [Taskar et al., 2003], and Conditional Random
Field (CRF) [Lafferty et al., 2001]. Next, we introduce a new algorithm, Voted Conditional Random
Field (VCRF) Section 5.2, with accompanying experiments as proof of concept. We also present
another algorithm, Voted StructBoost (VStructBoost), in Appendix C.
5.1 General framework for convex surrogate losses
Given (x, y) ∈ X × Y, the mapping h ↦ L(h(x), y) is typically not a convex function of h, which
leads to computationally hard optimization problems. This motivates the use of convex surrogate
losses. We first introduce a general formulation of surrogate losses for structured prediction problems.
Lemma 4. For any u ∈ R+, let Φu : R → R be an upper bound on v ↦ u·1_{v≤0}. Then, the following upper bound holds for any h ∈ H and (x, y) ∈ X × Y,

$$L(h(x), y) \;\le\; \max_{y'\neq y}\, \Phi_{L(y',y)}\big(h(x,y) - h(x,y')\big). \tag{9}$$
The proof is given in Appendix A. This result defines a general framework that enables us to
straightforwardly recover many of the most common state-of-the-art structured prediction algorithms
via suitable choices of Φu(v): (a) for Φu(v) = max(0, u(1 − v)), the right-hand side of (9) coincides with the surrogate loss defining StructSVM [Tsochantaridis et al., 2005]; (b) for Φu(v) = max(0, u − v), it coincides with the surrogate loss defining Max-Margin Markov Networks (M3N) [Taskar et al., 2003] when using for L the Hamming loss; and (c) for Φu(v) = log(1 + e^{u−v}), it coincides with the surrogate loss defining the Conditional Random Field (CRF) [Lafferty et al., 2001].
Moreover, alternative choices of Φu(v) can help define new algorithms. In particular, we will refer to the algorithm based on the surrogate loss defined by Φu(v) = u e^{−v} as StructBoost, in reference to the exponential loss used in AdaBoost. Another related alternative is based on the choice Φu(v) = e^{u−v}. See Appendix C for further details on this algorithm. In fact, for each Φu(v) described above, the
corresponding convex surrogate is an upper bound on either the multiplicative or additive margin
loss introduced in Section 3. Therefore, each of these algorithms seeks a hypothesis that minimizes
the generalization bounds presented in Section 3. To the best of our knowledge, this interpretation
of these well-known structured prediction algorithms is also new. In what follows, we derive new
structured prediction algorithms that minimize finer generalization bounds presented in Section 4.
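The four choices of Φu above translate directly into code. In the NumPy sketch below (the table of callables and the function name are ours, for illustration only), each Φu upper-bounds v ↦ u·1_{v≤0} as required by Lemma 4, and `surrogate` evaluates the right-hand side of (9):

import numpy as np

# Each Phi_u upper-bounds v -> u * 1[v <= 0], as required by Lemma 4:
PHI = {
    "StructSVM":   lambda u, v: np.maximum(0.0, u * (1.0 - v)),  # choice (a)
    "M3N":         lambda u, v: np.maximum(0.0, u - v),          # choice (b)
    "CRF":         lambda u, v: np.log1p(np.exp(u - v)),         # choice (c)
    "StructBoost": lambda u, v: u * np.exp(-v),                  # exponential
}

def surrogate(phi, scores, losses, y):
    """Right-hand side of (9): max_{y' != y} Phi_{L(y',y)}(h(x,y) - h(x,y'))."""
    mask = np.arange(len(scores)) != y
    return np.max(phi(losses[mask], scores[y] - scores[mask]))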
5.2 Voted Conditional Random Field (VCRF)
We first consider the convex surrogate loss based on Φu(v) = log(1 + e^{u−v}), which corresponds to the loss defining CRF models. Using the monotonicity of the logarithm and upper bounding the maximum by a sum gives the following upper bound on the surrogate loss:

$$\max_{y'\neq y}\, \log\!\big(1 + e^{L(y,y') - w\cdot(\Psi(x,y) - \Psi(x,y'))}\big) \;\le\; \log \sum_{y'\in Y} e^{L(y,y') - w\cdot(\Psi(x,y) - \Psi(x,y'))},$$
which, combined with the VRM principle, leads to the following optimization problem:

$$\min_{w}\; \frac{1}{m}\sum_{i=1}^m \log\Big(\sum_{y\in Y} e^{L(y,y_i) - w\cdot(\Psi(x_i,y_i) - \Psi(x_i,y))}\Big) + \sum_{k=1}^p (\lambda r_k + \beta)\,\|w_k\|_1, \tag{10}$$

where $r_k = r_1\sqrt{|F(k)|\log N}$. We refer to the learning algorithm based on the optimization problem (10) as VCRF. Note that for λ = 0, (10) coincides with the objective function of L1-regularized CRF. Observe that we can also directly use $\max_{y'\neq y} \log(1 + e^{L(y,y') - w\cdot(\Psi(x,y) - \Psi(x,y'))})$ or its upper bound $\sum_{y'\neq y} \log(1 + e^{L(y,y') - w\cdot(\Psi(x,y) - \Psi(x,y'))})$ as a convex surrogate. We can similarly derive
an L2 -regularization formulation of the VCRF algorithm. In Appendix D, we describe efficient
algorithms for solving the VCRF and VStructBoost optimization problems.
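As a rough illustration, the following NumPy sketch evaluates objective (10) when the label set Y is small enough to enumerate explicitly; the tensor layout, the names of the regularization constants λ and β, and the block structure of w are our assumptions for illustration, not the paper's implementation.

import numpy as np

def logsumexp(a):
    m = np.max(a)
    return m + np.log(np.sum(np.exp(a - m)))

def vcrf_objective(w, Psi, L, y, lam, beta, r, blocks):
    """Objective (10) for an explicitly enumerable label set.

    Psi[i, y'] is the feature vector Psi(x_i, y') (shape m x |Y| x d),
    L[i, y'] = L(y', y_i), y[i] is the index of y_i, r[k] the per-family
    complexity r_k, and blocks[k] the coordinates of w in family k.
    """
    m = Psi.shape[0]
    data = 0.0
    for i in range(m):
        margins = (Psi[i, y[i]] - Psi[i]) @ w  # w . (Psi(x_i,y_i) - Psi(x_i,y'))
        data += logsumexp(L[i] - margins)
    reg = sum((lam * r[k] + beta) * np.abs(w[blocks[k]]).sum()
              for k in range(len(blocks)))
    return data / m + reg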
6 Experiments
In Appendix B, we corroborate our theory by reporting experimental results suggesting that the
VCRF algorithm can outperform the CRF algorithm on a number of part-of-speech (POS) datasets.
7 Conclusion
We presented a general theoretical analysis of structured prediction. Our data-dependent margin
guarantees for structured prediction can be used to guide the design of new algorithms or to derive
guarantees for existing ones. Its explicit dependency on the properties of the factor graph and on
feature sparsity can help shed new light on the role played by the graph and features in generalization.
Our extension of the VRM theory to structured prediction provides a new analysis of generalization
when using a very rich set of features, which is common in applications such as natural language
processing and leads to new algorithms, VCRF and VStructBoost. Our experimental results for
VCRF serve as a proof of concept and motivate more extensive empirical studies of these algorithms.
Acknowledgments
This work was partly funded by NSF CCF-1535987 and IIS-1618662 and NSF GRFP DGE-1342536.
References
G. H. Bakir, T. Hofmann, B. Schölkopf, A. J. Smola, B. Taskar, and S. V. N. Vishwanathan. Predicting Structured
Data (Neural Information Processing). The MIT Press, 2007.
K. Chang, A. Krishnamurthy, A. Agarwal, H. Daumé III, and J. Langford. Learning to search better than your
teacher. In ICML, 2015.
M. Collins. Parameter estimation for statistical parsing models: Theory and practice of distribution-free methods.
In Proceedings of IWPT, 2001.
C. Cortes, M. Mohri, and J. Weston. A General Regression Framework for Learning String-to-String Mappings.
In Predicting Structured Data. MIT Press, 2007.
C. Cortes, V. Kuznetsov, and M. Mohri. Ensemble methods for structured prediction. In ICML, 2014.
C. Cortes, P. Goyal, V. Kuznetsov, and M. Mohri. Kernel extraction via voted risk minimization. JMLR, 2015.
H. Daumé III, J. Langford, and D. Marcu. Search-based structured prediction. Machine Learning, 75(3):
297–325, 2009.
J. R. Doppa, A. Fern, and P. Tadepalli. Structured prediction via output space search. JMLR, 15(1):1317–1350,
2014.
D. Jurafsky and J. H. Martin. Speech and Language Processing (2nd Edition). Prentice-Hall, Inc., 2009.
V. Kuznetsov, M. Mohri, and U. Syed. Multi-class deep boosting. In NIPS, 2014.
J. Lafferty, A. McCallum, and F. Pereira. Conditional random fields: Probabilistic models for segmenting and
labeling sequence data. In ICML, 2001.
M. Lam, J. R. Doppa, S. Todorovic, and T. G. Dietterich. Hc-search for structured prediction in computer vision.
In CVPR, 2015.
Y. Lei, Ü. D. Dogan, A. Binder, and M. Kloft. Multi-class svms: From tighter data-dependent generalization
bounds to novel algorithms. In NIPS, 2015.
A. Lucchi, L. Yunpeng, and P. Fua. Learning for structured prediction using approximate subgradient descent
with working sets. In CVPR, 2013.
A. Maurer. A vector-contraction inequality for rademacher complexities. In ALT, 2016.
D. McAllester. Generalization bounds and consistency for structured labeling. In Predicting Structured Data.
MIT Press, 2007.
M. Mohri, A. Rostamizadeh, and A. Talwalkar. Foundations of Machine Learning. The MIT Press, 2012.
D. Nadeau and S. Sekine. A survey of named entity recognition and classification. Linguisticae Investigationes,
30(1):3–26, January 2007.
S. Ross, G. J. Gordon, and D. Bagnell. A reduction of imitation learning and structured prediction to no-regret
online learning. In AISTATS, 2011.
B. Taskar, C. Guestrin, and D. Koller. Max-margin Markov networks. In NIPS, 2003.
I. Tsochantaridis, T. Joachims, T. Hofmann, and Y. Altun. Large margin methods for structured and interdependent output variables. JMLR, 6:1453–1484, Dec. 2005.
O. Vinyals, L. Kaiser, T. Koo, S. Petrov, I. Sutskever, and G. Hinton. Grammar as a foreign language. In NIPS,
2015a.
O. Vinyals, A. Toshev, S. Bengio, and D. Erhan. Show and tell: A neural image caption generator. In CVPR,
2015b.
D. Zhang, L. Sun, and W. Li. A structured prediction approach for statistical machine translation. In IJCNLP,
2008.
6,064 | 6,486 | Coresets for Scalable Bayesian Logistic Regression
Jonathan H. Huggins
Trevor Campbell
Tamara Broderick
Computer Science and Artificial Intelligence Laboratory, MIT
{jhuggins@, tdjc@, tbroderick@csail.}mit.edu
Abstract
The use of Bayesian methods in large-scale data settings is attractive because of
the rich hierarchical models, uncertainty quantification, and prior specification
they provide. Standard Bayesian inference algorithms are computationally expensive, however, making their direct application to large datasets difficult or infeasible. Recent work on scaling Bayesian inference has focused on modifying
the underlying algorithms to, for example, use only a random data subsample at
each iteration. We leverage the insight that data is often redundant to instead obtain a weighted subset of the data (called a coreset) that is much smaller than the
original dataset. We can then use this small coreset in any number of existing
posterior inference algorithms without modification. In this paper, we develop an
efficient coreset construction algorithm for Bayesian logistic regression models.
We provide theoretical guarantees on the size and approximation quality of the
coreset ? both for fixed, known datasets, and in expectation for a wide class of
data generative models. Crucially, the proposed approach also permits efficient
construction of the coreset in both streaming and parallel settings, with minimal
additional effort. We demonstrate the efficacy of our approach on a number of
synthetic and real-world datasets, and find that, in practice, the size of the coreset
is independent of the original dataset size. Furthermore, constructing the coreset
takes a negligible amount of time compared to that required to run MCMC on it.
1 Introduction
Large-scale datasets, comprising tens or hundreds of millions of observations, are becoming the
norm in scientific and commercial applications ranging from population genetics to advertising. At
such scales even simple operations, such as examining each data point a small number of times,
become burdensome; it is sometimes not possible to fit all data in the physical memory of a single machine. These constraints have, in the past, limited practitioners to relatively simple statistical modeling approaches. However, the rich hierarchical models, uncertainty quantification, and
prior specification provided by Bayesian methods have motivated substantial recent effort in making
Bayesian inference procedures, which are often computationally expensive, scale to the large-data
setting.
The standard approach to Bayesian inference for large-scale data is to modify a specific inference algorithm, such as MCMC or variational Bayes, to handle distributed or streaming processing of data.
Examples include subsampling and streaming methods for variational Bayes [6, 7, 16], subsampling
methods for MCMC [4, 18, 24], and distributed 'consensus' methods for MCMC [8, 19, 21, 22].
Existing methods, however, suffer from both practical and theoretical limitations. Stochastic variational inference [16] and subsampling MCMC methods use a new random subset of the data at each
iteration, which requires random access to the data and hence is infeasible for very large datasets
that do not fit into memory. Furthermore, in practice, subsampling MCMC methods have been found
to require examining a constant fraction of the data at each iteration, severely limiting the computational gains obtained [5, 23]. More scalable methods such as consensus MCMC [19, 21, 22]
and streaming variational Bayes [6, 7] lead to gains in computational efficiency, but lack rigorous
justification and provide no guarantees on the quality of inference.
An important insight in the large-scale setting is that much of the data is often redundant, though
there may also be a small set of data points that are distinctive. For example, in a large document
corpus, one news article about a hockey game may serve as an excellent representative of hundreds
or thousands of other similar pieces about hockey games. However, there may only be a few articles
about luge, so it is also important to include at least one article about luge. Similarly, one individual?s genetic information may serve as a strong representative of other individuals from the same
ancestral population admixture, though some individuals may be genetic outliers. We leverage data
redundancy to develop a scalable Bayesian inference framework that modifies the dataset instead of
the common practice of modifying the inference algorithm. Our method, which can be thought of as
a preprocessing step, constructs a coreset ? a small, weighted subset of the data that approximates
the full dataset [1, 9] ? that can be used in many standard inference procedures to provide posterior
approximations with guaranteed quality. The scalability of posterior inference with a coreset thus
simply depends on the coreset?s growth with the full dataset size. To the best of our knowledge,
coresets have not previously been used in a Bayesian setting.
The concept of coresets originated in computational geometry (e.g. [1]), but then became popular
in theoretical computer science as a way to efficiently solve clustering problems such as k-means
and PCA (see [9, 11] and references therein). Coreset research in the machine learning community
has focused on scalable clustering in the optimization setting [3, 17], with the exception of Feldman
et al. [10], who developed a coreset algorithm for Gaussian mixture models. Coreset-like ideas have
previously been explored for maximum likelihood-learning of logistic regression models, though
these methods either lack rigorous justification or have only asymptotic guarantees (see [15] and
references therein).
The job of the coreset in the Bayesian setting is to provide an approximation of the full data log-likelihood up to a multiplicative error uniformly over the parameter space. As this paper is the first
foray into applying coresets in Bayesian inference, we begin with a theoretical analysis of the quality
of the posterior distribution obtained from such an approximate log-likelihood. The remainder of the
paper develops the efficient construction of small coresets for Bayesian logistic regression, a useful
and widely-used model for the ubiquitous problem of binary classification. We develop a coreset construction algorithm, the output of which uniformly approximates the full data log-likelihood
over parameter values in a ball with a user-specified radius. The approximation guarantee holds for
a given dataset with high probability. We also obtain results showing that the boundedness of the
parameter space is necessary for the construction of a nontrivial coreset, as well as results characterizing the algorithm?s expected performance under a wide class of data-generating distributions.
Our proposed algorithm is applicable in both the streaming and distributed computation settings,
and the coreset can then be used by any inference algorithm which accesses the (gradient of the)
log-likelihood as a black box. Although our coreset algorithm is specifically for logistic regression,
our approach is broadly applicable to other Bayesian generative models.
Experiments on a variety of synthetic and real-world datasets validate our approach and demonstrate
robustness to the choice of algorithm hyperparameters. An empirical comparison to random subsampling shows that, in many cases, coreset-based posteriors are orders of magnitude better in terms of
maximum mean discrepancy, including on a challenging 100-dimensional real-world dataset. Crucially, our coreset construction algorithm adds negligible computational overhead to the inference
procedure. All proofs are deferred to the Supplementary Material.
2 Problem Setting
We begin with the general problem of Bayesian posterior inference. Let $D = \{(X_n, Y_n)\}_{n=1}^N$ be a dataset, where Xn ∈ X is a vector of covariates and Yn ∈ Y is an observation. Let π0(θ) be a prior density on a parameter θ ∈ Θ and let p(Yn | Xn, θ) be the likelihood of observation n given the parameter θ. The Bayesian posterior is given by the density πN(θ), where

$$\pi_N(\theta) := \frac{\exp(L_N(\theta))\,\pi_0(\theta)}{E_N}, \quad L_N(\theta) := \sum_{n=1}^N \ln p(Y_n \mid X_n, \theta), \quad E_N := \int \exp(L_N(\theta))\,\pi_0(\theta)\, d\theta.$$
Algorithm 1 Construction of logistic regression coreset
Require: Data D, k-clustering Q, radius R > 0, tolerance ε > 0, failure rate δ ∈ (0, 1)
1: for n = 1, . . . , N do  ▷ calculate sensitivity upper bounds using the k-clustering
2:   $m_n \leftarrow \dfrac{N}{1 + \sum_{i=1}^k |G_i^{(-n)}|\, e^{-R\,\|\bar Z_{G,i}^{(-n)} - Z_n\|_2}}$
3: end for
4: $\bar m_N \leftarrow \frac{1}{N}\sum_{n=1}^N m_n$
5: $M \leftarrow \big\lceil \frac{c\,\bar m_N}{\varepsilon^2}\,[(D+1)\log \bar m_N + \log(1/\delta)] \big\rceil$  ▷ coreset size; c is from proof of Theorem B.1
6: for n = 1, . . . , N do
7:   $p_n \leftarrow \frac{m_n}{N\,\bar m_N}$  ▷ importance weights of data
8: end for
9: $(K_1, . . . , K_N) \leftarrow \mathrm{Multi}(M, (p_n)_{n=1}^N)$  ▷ sample data for coreset
10: for n = 1, . . . , N do
11:   $\gamma_n \leftarrow \frac{K_n}{p_n M}$  ▷ calculate coreset weights
12: end for
13: $\tilde D \leftarrow \{(\gamma_n, X_n, Y_n) \mid \gamma_n > 0\}$  ▷ only keep data points with non-zero weights
14: return $\tilde D$
Our aim is to construct a weighted dataset $\tilde D = \{(\gamma_m, \tilde X_m, \tilde Y_m)\}_{m=1}^M$ with M ≪ N such that the weighted log-likelihood $\tilde L_N(\theta) = \sum_{m=1}^M \gamma_m \ln p(\tilde Y_m \mid \tilde X_m, \theta)$ satisfies

$$|L_N(\theta) - \tilde L_N(\theta)| \le \varepsilon\, |L_N(\theta)|, \quad \forall \theta \in \Theta. \tag{1}$$

If $\tilde D$ satisfies Eq. (1), it is called an ε-coreset of D, and the approximate posterior

$$\tilde\pi_N(\theta) = \frac{\exp(\tilde L_N(\theta))\,\pi_0(\theta)}{\tilde E_N}, \quad \text{with} \quad \tilde E_N = \int \exp(\tilde L_N(\theta))\,\pi_0(\theta)\, d\theta,$$

has a marginal likelihood $\tilde E_N$ which approximates the true marginal likelihood $E_N$, shown by Proposition 2.1. Thus, from a Bayesian perspective, the ε-coreset is a useful notion of approximation.
Proposition 2.1. Let $L(\theta)$ and $\tilde L(\theta)$ be arbitrary non-positive log-likelihood functions that satisfy $|L(\theta) - \tilde L(\theta)| \le \varepsilon |L(\theta)|$ for all θ ∈ Θ. Then for any prior π0(θ) such that the marginal likelihoods

$$E = \int \exp(L(\theta))\,\pi_0(\theta)\, d\theta \quad \text{and} \quad \tilde E = \int \exp(\tilde L(\theta))\,\pi_0(\theta)\, d\theta$$

are finite, the marginal likelihoods satisfy $|\ln E - \ln \tilde E| \le \varepsilon\, |\ln E|$.
3 Coresets for Logistic Regression
3.1 Coreset Construction
In logistic regression, the covariates are real feature vectors Xn ∈ R^D, the observations are labels Yn ∈ {−1, 1}, θ ∈ R^D, and the likelihood is defined as

$$p(Y_n \mid X_n, \theta) = p_{\mathrm{logistic}}(Y_n \mid X_n, \theta) := \frac{1}{1 + \exp(-Y_n X_n \cdot \theta)}.$$
The analysis in this work allows any prior π0(θ); common choices are the Gaussian, Cauchy [12], and spike-and-slab [13]. For notational brevity, we define Zn := Yn Xn, and let φ(s) := ln(1 + exp(−s)). Choosing the optimal ε-coreset is not computationally feasible, so we take a less direct approach. We design our coreset construction algorithm and prove its correctness using a quantity σn(θ) called the sensitivity [9], which quantifies the redundancy of a particular data point n: the larger the sensitivity, the less redundant. In the setting of logistic regression, we have that the sensitivity is

$$\sigma_n(\Theta) := \sup_{\theta\in\Theta} \frac{N\,\phi(Z_n \cdot \theta)}{\sum_{\ell=1}^N \phi(Z_\ell \cdot \theta)}.$$
Intuitively, σn(Θ) captures how much influence data point n has on the log-likelihood LN(θ) when varying the parameter θ ∈ Θ, and thus data points with high sensitivity should be included in the coreset. Evaluating σn(Θ) exactly is not tractable, however, so an upper bound mn ≥ σn(Θ) must be used in its place. Thus, the key challenge is to efficiently compute a tight upper bound on the sensitivity.
For the moment we will consider Θ = BR for any R > 0, where $B_R := \{\theta \in \mathbb R^D \mid \|\theta\|_2 \le R\}$; we discuss the case of Θ = R^D shortly. Choosing the parameter space to be a Euclidean ball is reasonable since data is usually preprocessed to have mean zero and variance 1 (or, for sparse data, to be between -1 and 1), so each component of θ is typically in a range close to zero (e.g. between -4 and 4) [12].
The idea behind our sensitivity upper bound construction is that we would expect data points that are bunched together to be redundant, while data points that are far from other data have a large effect on inferences. Clustering is an effective way to summarize data and detect outliers, so we
will use a k-clustering of the data D to construct the sensitivity bound. A k-clustering is given by k
cluster centers Q = {Q1, . . . , Qk}. Let $G_i := \{Z_n \mid i = \arg\min_j \|Q_j - Z_n\|_2\}$ be the set of vectors closest to center Qi and let $G_i^{(-n)} := G_i \setminus \{Z_n\}$. Define $Z_{G,i}^{(-n)}$ to be a uniform random vector from $G_i^{(-n)}$ and let $\bar Z_{G,i}^{(-n)} := \mathrm{E}[Z_{G,i}^{(-n)}]$ be its mean. The following lemma uses a k-clustering to establish an efficiently computable upper bound on σn(BR):
Lemma 3.1. For any k-clustering Q,

$$\sigma_n(B_R) \;\le\; m_n := \frac{N}{1 + \sum_{i=1}^k |G_i^{(-n)}|\, e^{-R\,\|\bar Z_{G,i}^{(-n)} - Z_n\|_2}}. \tag{2}$$

Furthermore, mn can be calculated in O(k) time.
The bound in Eq. (2) captures the intuition that if the data forms tight clusters (that is, each Zn is close to one of the cluster centers), we expect each cluster to be well-represented by a small number of typical data points. For example, if $Z_n \in G_i$, $\|\bar Z_{G,i}^{(-n)} - Z_n\|_2$ is small, and $|G_i^{(-n)}| = \Theta(N)$, then σn(BR) = O(1). We use the (normalized) sensitivity bounds obtained from Lemma 3.1 to form an importance distribution $(p_n)_{n=1}^N$ from which to sample the coreset. If we sample Zn, then we assign it weight γn proportional to 1/pn. The size of the coreset depends on the mean sensitivity bound, the desired error ε, and a quantity closely related to the VC dimension of θ ↦ φ(θ · Z), which we show is D + 1. Combining these pieces we obtain Algorithm 1, which constructs an ε-coreset with high probability by Theorem 3.2.
Theorem 3.2. Fix ε > 0, δ ∈ (0, 1), and R > 0. Consider a dataset D with k-clustering Q. With probability at least 1 − δ, Algorithm 1 with inputs (D, Q, R, ε, δ) constructs an ε-coreset of D for logistic regression with parameter space Θ = BR. Furthermore, Algorithm 1 runs in O(N k) time.
Remark 3.3. The coreset algorithm is efficient with an O(N k) running time. However, the algorithm requires a k-clustering, which must also be constructed. A high-quality clustering can be obtained cheaply via k-means++ in O(N k) time [2], although a coreset algorithm could also be used.
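A direct NumPy transcription of Algorithm 1 is sketched below, taking a k-clustering (e.g. from k-means++, as suggested in Remark 3.3) as input. Since the constant c from the proof of Theorem B.1 is not given here in closed form, c = 1 is used as a placeholder; the function name is ours.

import numpy as np

def logistic_coreset(Z, centers, labels, R, eps, delta, c=1.0, seed=0):
    """Sketch of Algorithm 1. Z[n] = Y_n * X_n; (centers, labels) is a
    k-clustering of the rows of Z; c stands in for the constant from
    Theorem B.1."""
    rng = np.random.default_rng(seed)
    N, D = Z.shape
    k = centers.shape[0]
    counts = np.bincount(labels, minlength=k).astype(float)
    sums = np.zeros((k, D))
    np.add.at(sums, labels, Z)
    m = np.empty(N)
    for n in range(N):                      # sensitivity upper bounds (Lemma 3.1)
        cnt = counts.copy()
        s = sums.copy()
        cnt[labels[n]] -= 1                 # leave Z_n out of its own cluster
        s[labels[n]] -= Z[n]
        denom = 1.0
        for i in range(k):
            if cnt[i] > 0:
                zbar = s[i] / cnt[i]        # mean of G_i^{(-n)}
                denom += cnt[i] * np.exp(-R * np.linalg.norm(zbar - Z[n]))
        m[n] = N / denom
    m_bar = m.mean()
    M = int(np.ceil(c * m_bar / eps**2 * ((D + 1) * np.log(m_bar) + np.log(1 / delta))))
    p = m / (N * m_bar)                     # importance distribution
    p = p / p.sum()                         # guard against floating-point drift
    K = rng.multinomial(M, p)               # multinomial coreset sample
    keep = np.flatnonzero(K)
    gamma = K[keep] / (p[keep] * M)         # coreset weights
    return keep, gamma                      # indices into Z and their weights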
Examining Algorithm 1, we see that the coreset size M is of order $\bar m_N \log \bar m_N$, where $\bar m_N = \frac{1}{N}\sum_n m_n$. So for M to be smaller than N, at a minimum, $\bar m_N$ should satisfy $\bar m_N = \tilde o(N)$,¹ and preferably $\bar m_N = O(1)$. Indeed, for the coreset size to be small, it is critical that (a) Θ is chosen such that most of the sensitivities satisfy σn(Θ) ≪ N (since N is the maximum possible sensitivity), (b) each upper bound mn is close to σn(Θ), and (c) ideally, that $\bar m_N$ is bounded by a constant. In Section 3.2, we address (a) by providing sensitivity lower bounds, thereby showing that the constraint Θ = BR is necessary for nontrivial sensitivities even for 'typical' (i.e. non-pathological) data. We then apply our lower bounds to address (b) and show that our bound in Lemma 3.1 is nearly tight. In Section 3.3, we address (c) by establishing the expected performance of the bound in Lemma 3.1 for a wide class of data-generating distributions.
¹ Recall that the tilde notation suppresses logarithmic terms.
3.2 Sensitivity Lower Bounds
We now develop lower bounds on the sensitivity to demonstrate that essentially we must limit ourselves to bounded Θ,² thus making our choice of Θ = BR a natural one, and to show that the sensitivity upper bound from Lemma 3.1 is nearly tight.
We begin by showing that in both the worst case and the average case, for all n, σn(R^D) = N, the maximum possible sensitivity, even when the Zn are arbitrarily close. Intuitively, the reason for the worst-case behavior is that if there is a separating hyperplane between a data point Zn and the remaining data points, and θ is in the direction of that hyperplane, then when ‖θ‖2 becomes very large, Zn becomes arbitrarily more important than any other data point.
Theorem 3.4. For any D ≥ 3, N ∈ ℕ and 0 < ε₀ < 1, there exists ε > 0 and unit vectors Z1, . . . , ZN ∈ R^D such that for all pairs n, n′, Zn · Zn′ ≥ 1 − ε₀ and for all R > 0 and n,

$$\sigma_n(B_R) \;\ge\; \frac{N}{1 + (N-1)\, e^{-R\sqrt{\varepsilon_0}/4}}, \qquad \text{and hence} \qquad \sigma_n(\mathbb R^D) = N.$$
The proof of Theorem 3.4 is based on choosing N distinct unit vectors V1, . . . , VN ∈ R^{D−1} and setting ε = 1 − max_{n≠n′} Vn · Vn′ > 0. But what is a 'typical' value for ε? In the case of the vectors being uniformly distributed on the unit sphere, we have the following scaling for ε as N increases:
Proposition 3.5. If V1, . . . , VN are independent and uniformly distributed on the unit sphere $S^D := \{v \in \mathbb R^D \mid \|v\| = 1\}$ with D ≥ 2, then with high probability

$$1 - \max_{n\neq n'} V_n \cdot V_{n'} \;\ge\; C_D\, N^{-4/(D-1)},$$

where C_D is a constant depending only on D.
Furthermore, N can be exponential in D even with max_{n≠n′} Vn · Vn′ remaining very close to 1:
Proposition 3.6. For $N = \lfloor \exp((1-\varepsilon)^2 D/4)/\sqrt 2 \rfloor$, and V1, . . . , VN i.i.d. such that $V_{ni} = \pm\frac{1}{\sqrt D}$ with probability 1/2, then with probability at least 1/2, $1 - \max_{n\neq n'} V_n \cdot V_{n'} \ge \varepsilon$.
Propositions 3.5 and 3.6 demonstrate that the data vectors Zn found in Theorem 3.4 are, in two
different senses, ?typical? vectors and should not be thought of as worst-case data only occurring
in some ?negligible? or zero-measure set. These three results thus demonstrate that it is necessary
to restrict attention to bounded Θ. We can also use Theorem 3.4 to show that our sensitivity upper
bound is nearly tight.
Corollary 3.7. For the data Z1, . . . , ZN from Theorem 3.4,

$$\frac{N}{1 + (N-1)\, e^{-R\sqrt{\varepsilon_0}/4}} \;\le\; \sigma_n(B_R) \;\le\; \frac{N}{1 + (N-1)\, e^{-R\sqrt{2\varepsilon_0}}}.$$
3.3 k-Clustering Sensitivity Bound Performance
While Lemma 3.1 and Corollary 3.7 provide an upper bound on the sensitivity given a fixed dataset,
we would also like to understand how the expected mean sensitivity increases with N . We might
expect it to be finite since the logistic regression likelihood model is parametric; the coreset would
thus be acting as a sort of approximate finite sufficient statistic. Proposition 3.8 characterizes the
expected performance of the upper bound from Lemma 3.1 under a wide class of generating distributions. This result demonstrates that, under reasonable conditions, the expected value of $\bar m_N$ is
bounded for all N . As a concrete example, Corollary 3.9 specializes Proposition 3.8 to data with a
single shared Gaussian generating distribution.
Proposition 3.8. Let $X_n \overset{\text{indep}}{\sim} N(\mu_{L_n}, \Sigma_{L_n})$, where $L_n \overset{\text{indep}}{\sim} \mathrm{Multi}(\pi_1, \pi_2, \dots)$ is the mixture component responsible for generating Xn. For n = 1, . . . , N, let Yn ∈ {−1, 1} be conditionally independent given Xn and set Zn = Yn Xn. Select 0 < r < 1/2, and define $\bar\pi_i = \max(\pi_i - N^{-r}, 0)$. The clustering of the data implied by $(L_n)_{n=1}^N$ results in the expected sensitivity bound

$$\mathrm{E}[\bar m_N] \;\le\; \sum_{i:\bar\pi_i>0} \frac{1}{\bar\pi_i\, e^{-R\sqrt{A_i N^{-1}\bar\pi_i^{-1} + B_i}}} \;+\; N e^{-2N^{1-2r}}\, \frac{1}{\sum_i \pi_i\, e^{-R\sqrt{B_i}}},$$
² Certain pathological datasets allow us to use unbounded Θ, but we do not assume we are given such data.
Figure 1: (a) Percentage of time spent creating the coreset relative to the total inference time (including 10,000 iterations of MCMC). Except for very small coreset sizes, coreset construction is a small fraction of the overall time. (b) Binary10 and (c) Webspam: the mean sensitivities for varying choices of R and k. When R varies k = 6 and when k varies R = 3. The mean sensitivity increases exponentially in R, as expected, but is robust to the choice of k.
where $A_i := \mathrm{Tr}[\Sigma_i] + (1 - \bar y_i^2)\,\mu_i^T \mu_i$, $B_i := \sum_j \pi_j\big(\mathrm{Tr}[\Sigma_j] + \bar y_j^2\,\mu_i^T \mu_i - 2\bar y_i \bar y_j\,\mu_i^T \mu_j + \mu_j^T \mu_j\big)$, and $\bar y_j = \mathrm{E}[Y_1 \mid L_1 = j]$.
Corollary 3.9. In the setting of Proposition 3.8, if π1 = 1 and all data is assigned to a single cluster, then there is a constant C such that for sufficiently large N, $\mathrm{E}[\bar m_N] \le C\, e^{R\sqrt{\mathrm{Tr}[\Sigma_1] + (1-\bar y_1^2)\,\mu_1^T \mu_1}}$.
3.4 Streaming and Parallel Settings
Algorithm 1 is a batch algorithm, but it can easily be used in parallel and streaming computation
settings using standard methods from the coreset literature, which are based on the following two
observations (cf. [10, Section 3.2]):
1. If $\tilde D_i$ is an ε-coreset for $D_i$, i = 1, 2, then $\tilde D_1 \cup \tilde D_2$ is an ε-coreset for $D_1 \cup D_2$.
2. If $\tilde D$ is an ε-coreset for D and $\tilde D'$ is an ε′-coreset for $\tilde D$, then $\tilde D'$ is an ε″-coreset for D, where ε″ := (1 + ε)(1 + ε′) − 1.
We can use these observations to merge coresets that were constructed either in parallel, or sequentially, in a binary tree. Coresets are computed for two data blocks, merged using observation 1,
then compressed further using observation 2. The next two data blocks have coresets computed
and merged/compressed in the same manner, then the coresets from blocks 1&2 and 3&4 can be
merged/compressed analogously. We continue in this way and organize the merge/compress operations into a binary tree. Then, if there are B data blocks total, only log B blocks ever need be
maintained simultaneously. In the streaming setting we would choose blocks of constant size, so
B = O(N ), while in the parallel setting B would be the number of machines available.
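A sketch of the resulting streaming scheme follows; it assumes a coreset is represented as a plain Python list of weighted points and that `build(data, eps)` (e.g. Algorithm 1) accepts already-weighted data. By observation 2, L levels of compression yield a ((1 + ε)^L − 1)-coreset.

def streaming_coreset(blocks, build, eps):
    """Merge/compress coresets over B data blocks in a binary tree, so at
    most O(log B) coresets are held at once."""
    stack = []                              # (tree level, coreset) pairs
    for block in blocks:
        level, node = 0, build(block, eps)
        while stack and stack[-1][0] == level:
            _, sibling = stack.pop()
            node = build(sibling + node, eps)  # obs. 1: union; obs. 2: compress
            level += 1
        stack.append((level, node))
    result = []
    for _, node in stack:                   # fold any leftover partial levels
        result = build(result + node, eps) if result else node
    return result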
4 Experiments
We evaluated the performance of the logistic regression coreset algorithm on a number of synthetic
and real-world datasets. We used a maximum dataset size of 1 million examples because we wanted
to be able to calculate the true posterior, which would be infeasible for extremely large datasets.
Synthetic Data. We generated synthetic binary data according to the model $X_{nd} \overset{\text{indep}}{\sim} \mathrm{Bern}(p_d)$, d = 1, . . . , D and $Y_n \overset{\text{indep}}{\sim} p_{\mathrm{logistic}}(\cdot \mid X_n, \theta)$. The idea is to simulate data in which there are a small number of rarely occurring but highly predictive features, which is a common real-world phenomenon. We thus took p = (1, .2, .3, .5, .01, .1, .2, .007, .005, .001) and θ = (−3, 1.2, −.5, .8, 3, −1., −.7, 4, 3.5, 4.5) for the D = 10 experiments (Binary10) and the first 5 components of p and θ for the D = 5 experiments (Binary5). The generative model is the same one used by Scott et al. [21] and the first 5 components of p and θ correspond to those used in the
Figure 2: ((a) Binary5, (b) Binary10, (c) Mixture, (d) ChemReact, (e) Webspam, (f) CovType.) Polynomial MMD and negative test log-likelihood of random sampling and the logistic regression coreset algorithm for synthetic and real data with varying subset sizes (lower is better for all plots). For the synthetic data, N = 10^6 total data points were used and 10^3 additional data points were generated for testing. For the real data, 2,500 (resp. 50,000 and 29,000) data points of the ChemReact (resp. Webspam and CovType) dataset were held out for testing. One standard deviation error bars were obtained by repeating each experiment 20 times.
Scott et al. experiments (given in [21, Table 1b]). We generated a synthetic mixture dataset with continuous covariates (Mixture) using a model similar to that of Han et al. [15]: $Y_n \overset{\text{i.i.d.}}{\sim} \mathrm{Bern}(1/2)$ and $X_n \overset{\text{indep}}{\sim} N(\mu_{Y_n}, I)$, where $\mu_{-1}$ = (0, 0, 0, 0, 0, 1, 1, 1, 1, 1) and $\mu_1$ = (1, 1, 1, 1, 1, 0, 0, 0, 0, 0).
Real-world Data. The ChemReact dataset consists of N = 26,733 chemicals, each with D = 100 properties. The goal is to predict whether each chemical is reactive. The Webspam corpus consists of N = 350,000 web pages, approximately 60% of which are spam. The covariates consist of the D = 127 features that each appear in at least 25 documents. The cover type (CovType) dataset consists of N = 581,012 cartographic observations with D = 54 features. The task is to predict the type of trees that are present at each observation location.
4.1 Scaling Properties of the Coreset Construction Algorithm
Constructing Coresets. In order for coresets to be a worthwhile preprocessing step, it is critical
that the time required to construct the coreset is small relative to the time needed to complete the inference procedure. We implemented the logistic regression coreset algorithm in Python.3 In Fig. 1a,
we plot the relative time to construct the coreset for each type of dataset (k = 6) versus the total inference time, including 10,000 iterations of the MCMC procedure described in Section 4.2. Except
for very small coreset sizes, the time to run MCMC dominates.
³ More details on our implementation are provided in the Supplementary Material. Code to recreate all of our experiments is available at https://bitbucket.org/jhhuggins/lrcoresets.
Sensitivity. An important question is how the mean sensitivity $\bar m_N$ scales with N, as it determines how the size of the coreset scales with the data. Furthermore, ensuring that mean sensitivity is robust to the number of clusters k is critical since needing to adjust the algorithm hyperparameters for each dataset could lead to an unacceptable increase in computational burden. We also seek to understand how the radius R affects the mean sensitivity. Figs. 1b and 1c show the results of our scaling experiments on the Binary10 and Webspam data. The mean sensitivity is essentially constant across a range of dataset sizes. For both datasets the mean sensitivity is robust to the choice of k and scales exponentially in R, as we would expect from Lemma 3.1.
4.2 Posterior Approximation Quality
Since the ultimate goal is to use coresets for Bayesian inference, the key empirical question is how
well a posterior formed using a coreset approximates the true posterior distribution. We compared
the coreset algorithm to random subsampling of data points, since that is the approach used in
many existing scalable versions of variational inference and MCMC [4, 16]. Indeed, coreset-based
importance sampling could be used as a drop-in replacement for the random subsampling used by
these methods, though we leave the investigation of this idea for future work.
Experimental Setup. We used the adaptive Metropolis-adjusted Langevin algorithm (MALA) [14, 20]
for posterior inference. For each dataset, we ran the coreset and random subsampling algorithms
20 times for each choice of subsample size M . We ran adaptive MALA for 100,000 iterations on
the full dataset and each subsampled dataset. The subsampled datasets were fixed for the entirety
of each run, in contrast to subsampling algorithms that resample the data at each iteration. For the
synthetic datasets, which are lower dimensional, we used k = 4 while for the real-world datasets,
which are higher dimensional, we used k = 6. We used a heuristic to choose R as large as was
feasible while still obtaining moderate total sensitivity bounds. For a clustering Q of data D,
? let
Pk P
I := N ?1 i=1 Z?Gi kZ ? Qi k2 be the normalized k-means score. We chose R = a/ I ,
(?n)
where a is a small constant. The idea is that, for i ? [k] and Zn ? Gi , we want RkZ?
?Zn k2 ? a
G,i
(?n)
on average, so the term exp{?RkZ?G,i ? Zn k2 } in Eq. (2) is not too small and hence ?n (BR ) is
not too large. Our experiments used a = 3. We obtained similar results for 4 ? k ? 8 and 2.5 ?
a ? 3.5, indicating that the logistic regression coreset algorithm has some robustness to the choice
of these hyperparameters. We used negative test log-likelihood and maximum mean discrepancy
(MMD) with a 3rd degree polynomial kernel as comparison metrics (so smaller is better).
Synthetic Data Results. Figures 2a-2c show the results for synthetic data. In terms of test loglikelihood, coresets did as well as or outperformed random subsampling. In terms of MMD, the
coreset posterior approximation typically outperformed random subsampling by 1-2 orders of magnitude and never did worse. These results suggest much can be gained by using coresets, with
comparable performance to random subsampling in the worst case.
Real-world Data Results. Figures 2d-2f show the results for real data. Using coresets led to better performance on ChemReact for small subset sizes. Because the dataset was fairly small and random subsampling was done without replacement, coresets were worse for larger subset sizes. Coreset and random subsampling performance was approximately the same for Webspam. On Webspam and CovType, coresets either outperformed or did as well as random subsampling in terms of MMD and test log-likelihood on almost all subset sizes. The only exception was that random subsampling was superior on Webspam for the smallest subset size. We suspect this is due to the variance introduced by the importance sampling procedure used to generate the coreset.
For both the synthetic and real-world data, in many cases we are able to obtain a high-quality logistic
regression posterior approximation using a coreset that is many orders of magnitude smaller than
the full dataset, sometimes just a few hundred data points. Using such a small coreset represents
a substantial reduction in the memory and computational requirements of the Bayesian inference
algorithm that uses the coreset for posterior inference. We expect that the use of coresets could lead
similar gains for other Bayesian models. Designing coreset algorithms for other widely-used models
is an exciting direction for future research.
Acknowledgments
All authors are supported by the Office of Naval Research under ONR MURI grant N000141110688. JHH is
supported by a National Defense Science and Engineering Graduate (NDSEG) Fellowship.
8
References
[1] P. K. Agarwal, S. Har-Peled, and K. R. Varadarajan. Geometric approximation via coresets. Combinatorial and computational geometry, 52:1–30, 2005.
[2] D. Arthur and S. Vassilvitskii. k-means++: The advantages of careful seeding. In Symposium on Discrete
Algorithms, pages 1027–1035. Society for Industrial and Applied Mathematics, 2007.
[3] O. Bachem, M. Lucic, S. H. Hassani, and A. Krause. Approximate K-Means++ in Sublinear Time. In
AAAI Conference on Artificial Intelligence, 2016.
[4] R. Bardenet, A. Doucet, and C. C. Holmes. On Markov chain Monte Carlo methods for tall data.
arXiv.org, May 2015.
[5] M. J. Betancourt. The Fundamental Incompatibility of Hamiltonian Monte Carlo and Data Subsampling.
In International Conference on Machine Learning, 2015.
[6] T. Broderick, N. Boyd, A. Wibisono, A. C. Wilson, and M. I. Jordan. Streaming Variational Bayes. In
Advances in Neural Information Processing Systems, Dec. 2013.
[7] T. Campbell, J. Straub, J. W. Fisher, III, and J. P. How. Streaming, Distributed Variational Inference for
Bayesian Nonparametrics. In Advances in Neural Information Processing Systems, 2015.
[8] R. Entezari, R. V. Craiu, and J. S. Rosenthal. Likelihood Inflating Sampling Algorithm. arXiv.org, May
2016.
[9] D. Feldman and M. Langberg. A unified framework for approximating and clustering data. In Symposium
on Theory of Computing. ACM Request Permissions, June 2011.
[10] D. Feldman, M. Faulkner, and A. Krause. Scalable training of mixture models via coresets. In Advances
in Neural Information Processing Systems, pages 2142–2150, 2011.
[11] D. Feldman, M. Schmidt, and C. Sohler. Turning big data into tiny data: Constant-size coresets for kmeans, pca and projective clustering. In Symposium on Discrete Algorithms, pages 1434–1453. SIAM,
2013.
[12] A. Gelman, A. Jakulin, M. G. Pittau, and Y.-S. Su. A weakly informative default prior distribution for
logistic and other regression models. The Annals of Applied Statistics, 2(4):1360?1383, Dec. 2008.
[13] E. I. George and R. E. McCulloch. Variable selection via Gibbs sampling. Journal of the American
Statistical Association, 88(423):881?889, 1993.
[14] H. Haario, E. Saksman, and J. Tamminen. An adaptive Metropolis algorithm. Bernoulli, pages 223?242,
2001.
[15] L. Han, T. Yang, and T. Zhang. Local Uncertainty Sampling for Large-Scale Multi-Class Logistic Regression. arXiv.org, Apr. 2016.
[16] M. D. Hoffman, D. M. Blei, C. Wang, and J. Paisley. Stochastic variational inference. The Journal of
Machine Learning Research, 14:1303?1347, 2013.
[17] M. Lucic, O. Bachem, and A. Krause. Strong Coresets for Hard and Soft Bregman Clustering with
Applications to Exponential Family Mixtures. In International Conference on Artificial Intelligence and
Statistics, 2016.
[18] D. Maclaurin and R. P. Adams. Firefly Monte Carlo: Exact MCMC with Subsets of Data. In Uncertainty
in Artificial Intelligence, Mar. 2014.
[19] M. Rabinovich, E. Angelino, and M. I. Jordan. Variational consensus Monte Carlo. arXiv.org, June 2015.
[20] G. O. Roberts and R. L. Tweedie. Exponential convergence of Langevin distributions and their discrete
approximations. Bernoulli, 2(4):341–363, Nov. 1996.
[21] S. L. Scott, A. W. Blocker, F. V. Bonassi, H. A. Chipman, E. I. George, and R. E. McCulloch. Bayes and
big data: The consensus Monte Carlo algorithm. In Bayes 250, 2013.
[22] S. Srivastava, V. Cevher, Q. Tran-Dinh, and D. Dunson. WASP: Scalable Bayes via barycenters of subset
posteriors. In International Conference on Artificial Intelligence and Statistics, 2015.
[23] Y. W. Teh, A. H. Thiery, and S. Vollmer. Consistency and fluctuations for stochastic gradient Langevin
dynamics. Journal of Machine Learning Research, 17(7):1–33, Mar. 2016.
[24] M. Welling and Y. W. Teh. Bayesian Learning via Stochastic Gradient Langevin Dynamics. In International Conference on Machine Learning, 2011.
6,065 | 6,487 | Universal Correspondence Network
Christopher B. Choy
Stanford University
chrischoy@ai.stanford.edu
JunYoung Gwak
Stanford University
jgwak@ai.stanford.edu
Silvio Savarese
Stanford University
ssilvio@stanford.edu
Manmohan Chandraker
NEC Laboratories America, Inc.
manu@nec-labs.com
Abstract
We present a deep learning framework for accurate visual correspondences and
demonstrate its effectiveness for both geometric and semantic matching, spanning
across rigid motions to intra-class shape or appearance variations. In contrast
to previous CNN-based approaches that optimize a surrogate patch similarity
objective, we use deep metric learning to directly learn a feature space that preserves
either geometric or semantic similarity. Our fully convolutional architecture, along
with a novel correspondence contrastive loss allows faster training by effective
reuse of computations, accurate gradient computation through the use of thousands
of examples per image pair and faster testing with O(n) feed forward passes for
n keypoints, instead of O(n²) for typical patch similarity methods. We propose
a convolutional spatial transformer to mimic patch normalization in traditional
features like SIFT, which is shown to dramatically boost accuracy for semantic
correspondences across intra-class shape variations. Extensive experiments on
KITTI, PASCAL, and CUB-2011 datasets demonstrate the significant advantages
of our features over prior works that use either hand-constructed or learned features.
1 Introduction
Correspondence estimation is the workhorse that drives several fundamental problems in computer
vision, such as 3D reconstruction, image retrieval or object recognition. Applications such as
structure from motion or panorama stitching that demand sub-pixel accuracy rely on sparse keypoint
matches using descriptors like SIFT [22]. In other cases, dense correspondences in the form of stereo
disparities, optical flow or dense trajectories are used for applications such as surface reconstruction,
tracking, video analysis or stabilization. In yet other scenarios, correspondences are sought not
between projections of the same 3D point in different images, but between semantic analogs across
different instances within a category, such as beaks of different birds or headlights of cars. Thus, in
its most general form, the notion of visual correspondence estimation spans the range from low-level
feature matching to high-level object or scene understanding.
Traditionally, correspondence estimation relies on hand-designed features or domain-specific priors.
In recent years, there has been an increasing interest in leveraging the power of convolutional neural
networks (CNNs) to estimate visual correspondences. For example, a Siamese network may take a
pair of image patches and generate their similiarity as the output [1, 34, 35]. Intermediate convolution
layer activations from the above CNNs are also usable as generic features.
However, such intermediate activations are not optimized for the visual correspondence task. Such
features are trained for a surrogate objective function (patch similarity) and do not necessarily form a
metric space for visual correspondence and thus, any metric operation such as distance does not have
30th Conference on Neural Information Processing Systems (NIPS 2016), Barcelona, Spain.
Figure 1: Various types of correspondence problems have traditionally required different specialized methods:
for example, SIFT or SURF for sparse structure from motion, DAISY or DSP for dense matching, SIFT Flow or
FlowWeb for semantic matching. The Universal Correspondence Network accurately and efficiently learns a
metric space for geometric correspondences, dense trajectories or semantic correspondences.
explicit interpretation. In addition, patch similarity is inherently inefficient, since features have to be
extracted even for overlapping regions within patches. Further, it requires O(n²) feed-forward passes
to compare each of n patches with n other patches in a different image.
In contrast, we present the Universal Correspondence Network (UCN), a CNN-based generic discriminative framework that learns both geometric and semantic visual correspondences. Unlike many
previous CNNs for patch similarity, we use deep metric learning to directly learn the mapping, or
feature, that preserves similarity (either geometric or semantic) for generic correspondences. The
mapping is, thus, invariant to projective transformations, intra-class shape or appearance variations,
or any other variations that are irrelevant to the considered similarity. We propose a novel correspondence contrastive loss that allows faster training by efficiently sharing computations and effectively
encoding neighborhood relations in feature space. At test time, correspondence reduces to a nearest
neighbor search in feature space, which is more efficient than evaluating pairwise patch similarities.
The UCN is fully convolutional, allowing efficient generation of dense features. We propose an
on-the-fly active hard-negative mining strategy for faster training. In addition, we propose a novel
adaptation of the spatial transformer [13], called the convolutional spatial transformer, designed to
make our features invariant to particular families of transformations. By learning the optimal feature
space that compensates for affine transformations, the convolutional spatial transformer imparts the
ability to mimic patch normalization of descriptors such as SIFT. Figure 1 illustrates our framework.
The capabilities of UCN are compared to a few important prior approaches in Table 1. Empirically,
the correspondences obtained from the UCN are denser and more accurate than most prior approaches
specialized for a particular task. We demonstrate this experimentally by showing state-of-the-art
performances on sparse SFM on KITTI, as well as dense geometric or semantic correspondences on
both rigid and non-rigid bodies in KITTI, PASCAL and CUB datasets.
To summarize, we propose a novel end-to-end system that optimizes a general correspondence
objective, independent of domain, with the following main contributions:
- Deep metric learning with an efficient correspondence contrastive loss for learning a feature representation that is optimized for the given correspondence task.
- Fully convolutional network for dense and efficient feature extraction, along with fast active hard negative mining.
- Fully convolutional spatial transformer for patch normalization.
- State-of-the-art correspondences across sparse SFM, dense matching and semantic matching, encompassing rigid bodies, non-rigid bodies and intra-class shape or appearance variations.
2 Related Works
Figure 2: System overview: The network is fully convolutional, consisting of a series of convolutions, pooling, nonlinearities and a convolutional spatial transformer, followed by channel-wise L2 normalization and correspondence contrastive loss. As inputs, the network takes a pair of images and coordinates of corresponding points in these images (blue: positive, red: negative). Features that correspond to the positive points (from both images) are trained to be closer to each other, while features that correspond to negative points are trained to be a certain margin apart. Before the last L2 normalization and after the FCNN, we placed a convolutional spatial transformer to normalize patches or take larger context into account.

Features          | Dense | Geometric Corr. | Semantic Corr. | Trainable | Efficient | Metric Space
SIFT [22]         |   ✗   |        ✓        |       ✗        |     ✗     |     ✓     |      ✗
DAISY [28]        |   ✓   |        ✓        |       ✗        |     ✗     |     ✓     |      ✗
Conv4 [21]        |   ✓   |        ✗        |       ✓        |     ✓     |     ✓     |      ✗
DeepMatching [25] |   ✓   |        ✓        |       ✗        |     ✗     |     ✗     |      ✓
Patch-CNN [34]    |   ✓   |        ✓        |       ✗        |     ✓     |     ✗     |      ✗
LIFT [33]         |   ✗   |        ✓        |       ✗        |     ✓     |     ✓     |      ✓
Ours              |   ✓   |        ✓        |       ✓        |     ✓     |     ✓     |      ✓

Table 1: Comparison of prior state-of-the-art methods with UCN (ours). The UCN generates dense and accurate correspondences for either geometric or semantic correspondence tasks. The UCN directly learns the feature space to achieve high accuracy and has distinct efficiency advantages, as discussed in Section 3.

Correspondences  Visual features form basic building blocks for many computer vision applications. Carefully designed features and kernel methods have influenced many fields such as structure
from motion, object recognition and image classification. Several hand-designed features, such as
SIFT, HOG, SURF and DAISY have found widespread applications [22, 3, 28, 8].
Recently, many CNN-based similarity measures have been proposed. A Siamese network is used in
[34] to measure patch similarity. A driving dataset is used to train a CNN for patch similarity in [1],
while [35] also uses a Siamese network for measuring patch similarity for stereo matching. A CNN
pretrained on ImageNet is analyzed for visual and semantic correspondence in [21]. Correspondences
are learned in [16] across both appearance and a global shape deformation by exploiting relationships
in fine-grained datasets. In contrast, we learn a metric space in which metric operations have direct
interpretations, rather than optimizing the network for patch similarity and using the intermediate
features. For this, we implement a fully convolutional architecture with a correspondence contrastive
loss that allows faster training and testing and propose a convolutional spatial transformer for local
patch normalization.
Metric learning using neural networks Neural networks are used in [5] for learning a mapping
where the Euclidean distance in the space preserves semantic distance. The loss function for learning
similarity metric using Siamese networks is subsequently formalized by [7, 12]. Recently, a triplet
loss is used by [29] for fine-grained image ranking, while the triplet loss is also used for face
recognition and clustering in [26]. Mini-batches are used for efficiently training the network in [27].
CNN invariances and spatial transformations A CNN is invariant to some types of transformations such as translation and scale due to convolution and pooling layers. However, explicitly
handling such invariances in forms of data augmentation or explicit network structure yields higher
accuracy in many tasks [17, 15, 13]. Recently, a spatial transformer network is proposed in [13] to
learn how to zoom in, rotate, or apply arbitrary transformations to an object of interest.
Fully convolutional neural network  Fully connected layers are converted into 1 × 1 convolutional
filters in [20] to propose a fully convolutional framework for segmentation. Changing a regular CNN
to a fully convolutional network for detection leads to speed and accuracy gains in [11]. Similar to
these works, we gain the efficiency of a fully convolutional architecture through reusing activations
for overlapping regions. Further, since the number of training instances is much larger than the number of images in a batch, variance in the gradient is reduced, leading to faster training and convergence.

Methods                  | # examples per image pair | # feed forwards per test
Siamese Network          | 1                         | O(N²)
Triplet Loss             | 2                         | O(N)
Contrastive Loss         | 1                         | O(N)
Corres. Contrastive Loss | > 10³                     | O(N)

Table 2: Comparisons between metric learning methods for visual correspondence. Feature learning allows faster test times. Correspondence contrastive loss allows us to use many more correspondences in one pair of images than other methods.

Figure 3: Correspondence contrastive loss takes three inputs: two dense features extracted from images and a correspondence table for positive and negative pairs.
3 Universal Correspondence Network
We now present the details of our framework. Recall that the Universal Correspondence Network is
trained to directly learn a mapping that preserves similarity instead of relying on surrogate features.
We discuss the fully convolutional nature of the architecture, a novel correspondence contrastive
loss for faster training and testing, active hard negative mining, as well as the convolutional spatial
transformer that enables patch normalization.
Fully Convolutional Feature Learning To speed up training and use resources efficiently, we
implement fully convolutional feature learning, which has several benefits. First, the network can
reuse some of the activations computed for overlapping regions. Second, we can train several
thousand correspondences for each image pair, which provides the network an accurate gradient for
faster learning. Third, hard negative mining is efficient and straightforward, as discussed subsequently.
Fourth, unlike patch-based methods, it can be used to extract dense features efficiently from images
of arbitrary sizes.
During testing, the fully convolutional network is faster as well. Patch similarity based networks such
as [1, 34, 35] require O(n²) feed forward passes, where n is the number of keypoints in each image,
as compared to only O(n) for our network. We note that extracting intermediate layer activations as
a surrogate mapping is a comparatively suboptimal choice since those activations are not directly
trained on the visual correspondence task.
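To make the efficiency argument concrete, the following minimal sketch (our illustration in PyTorch, not the authors' Caffe implementation; the tensor shapes and function name are assumptions) shows how dense features extracted once per image reduce matching to a nearest-neighbor search:

```python
import torch

def match_keypoints(feat1, feat2, kp1):
    # feat1, feat2: (C, H, W) dense, L2-normalized feature maps, each obtained
    # with a single fully convolutional forward pass per image
    # kp1: (n, 2) integer (x, y) query keypoints in the first image
    C, H, W = feat2.shape
    queries = feat1[:, kp1[:, 1], kp1[:, 0]].t()          # (n, C)
    candidates = feat2.reshape(C, H * W).t()              # (H*W, C)
    nn_idx = torch.cdist(queries, candidates).argmin(dim=1)
    # O(n) forward passes overall; matching is a distance computation
    return torch.stack([nn_idx % W, nn_idx // W], dim=1)  # matched (x, y)
```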
Correspondence Contrastive Loss Learning a metric space for visual correspondence requires
encoding corresponding points (in different views) to be mapped to neighboring points in the feature
space. To encode the constraints, we propose a generalization of the contrastive loss [7, 12], called
correspondence contrastive loss. Let FI (x) denote the feature in image I at location x = (x, y). The
loss function takes features from images I and I 0 , at coordinates x and x0 , respectively (see Figure 3).
If the coordinates x and x0 correspond to the same 3D point, we use the pair as a positive pair that
are encouraged to be close in the feature space, otherwise as a negative pair that are encouraged to be
at least margin m apart. We denote s = 1 for a positive pair and s = 0 for a negative pair. The full
correspondence contrastive loss is given by
L=
N
1 X
si kFI (xi ) ? FI 0 (xi 0 )k2 + (1 ? si ) max(0, m ? kFI (x) ? FI 0 (xi 0 )k)2
2N i
(1)
For each image pair, we sample correspondences from the training set. For instance, for the KITTI dataset, if we use each laser scan point, we can train up to 100k points in a single image pair. However, in practice, we used 3k correspondences to limit memory consumption. This allows more accurate
gradient computations than traditional contrastive loss, which yields one example per image pair.
We again note that the number of feed forward passes at test time is O(n) compared to O(n²) for
Siamese network variants [1, 35, 34]. Table 2 summarizes the advantages of a fully convolutional
architecture with correspondence contrastive loss.
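A minimal sketch of Eq. (1), assuming the features have already been sampled at the paired coordinates (the helper name and shapes are ours, written in PyTorch for illustration):

```python
import torch

def correspondence_contrastive_loss(f1, f2, s, m=1.0):
    # f1, f2: (N, C) features F_I(x_i) and F_I'(x'_i); s: (N,) with 1 for a
    # positive pair and 0 for a negative pair; m is the margin of Eq. (1)
    d = (f1 - f2).norm(dim=1)
    per_pair = s * d.pow(2) + (1 - s) * torch.clamp(m - d, min=0).pow(2)
    return per_pair.sum() / (2 * len(s))   # the 1/(2N) factor of Eq. (1)
```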
Hard Negative Mining The correspondence contrastive loss in Eq. (1) consists of two terms. The
first term minimizes the distance between positive pairs and the second term pushes negative pairs to
be at least margin m away from each other. Thus, the second term is only active when the distance
between the features F_I(x_i) and F_{I′}(x′_i) is smaller than the margin m. Such a boundary defines the
metric space, so it is crucial to find the negatives that violate the constraint and train the network to
push the negatives away. However, random negative pairs do not contribute to training since they are generally far from each other in the embedding space.

Figure 4: (a) SIFT normalizes for rotation and scaling. (b) The spatial transformer takes the whole image as an input to estimate a transformation. (c) Our convolutional spatial transformer applies an independent transformation to features.
Instead, we actively mine negative pairs that violate the constraints the most to dramatically speed up
training. We extract features from the first image and find the nearest neighbor in the second image.
If the location is far from the ground truth correspondence location, we use the pair as a negative. We
compute the nearest neighbor for all ground truth points on the first image. Such mining process is
time consuming since it requires O(mn) comparisons for m and n feature points in the two images,
respectively. Our experiments use a few thousand points for n, with m being all the features on the
second image, which is as large as 22000. We use a GPU implementation to speed up the K-NN
search [10] and embed it as a Caffe layer to actively mine hard negatives on-the-fly.
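A sketch of this mining step (our simplification; the paper uses a GPU K-NN Caffe layer [10], which we replace here with a brute-force torch.cdist for clarity):

```python
import torch

def mine_hard_negatives(feat1, feat2, kp1, kp2_gt, radius=16.0):
    # feat1, feat2: (C, H, W) dense feature maps
    # kp1, kp2_gt: (n, 2) ground-truth correspondences as (x, y) coordinates
    C, H, W = feat2.shape
    queries = feat1[:, kp1[:, 1], kp1[:, 0]].t()
    candidates = feat2.reshape(C, H * W).t()
    nn_idx = torch.cdist(queries, candidates).argmin(dim=1)
    nn_xy = torch.stack([nn_idx % W, nn_idx // W], dim=1).float()
    # a nearest neighbor far from the true location violates the metric most
    is_hard = (nn_xy - kp2_gt.float()).norm(dim=1) > radius
    return nn_xy[is_hard].long(), is_hard
```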
Convolutional Spatial Transformer CNNs are known to handle some degree of scale and rotation
invariances. However, handling spatial transformations explicitly using data-augmentation or a
special network structure have been shown to be more successful in many tasks [13, 15, 16, 17]. For
visual correspondence, finding the right scale and rotation is crucial, which is traditionally achieved
through patch normalization [23, 22]. A series of simple convolutions and poolings cannot mimic
such complex spatial transformations.
To mimic patch normalization, we borrow the idea of the spatial transformer layer [13]. However,
instead of a global image transformation, each keypoint in the image can undergo an independent
transformation. Thus, we propose a convolutional version to generate the transformed activations,
called the convolutional spatial transformer. As demonstrated in our experiments, this is especially
important for correspondences across large intra-class shape variations.
The proposed transformer takes its input from a lower layer and for each output feature, applies an
independent spatial transformation. The transformation parameters are also extracted convolutionally.
Since they go through an independent transformation, the transformed activations are placed inside
a larger activation without overlap and then go through a successive convolution with the stride to
combine the transformed activations independently. The stride has to be equal to the spatial transformer kernel size. Figure 4 illustrates the convolutional spatial transformer module.
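The following is a deliberately naive PyTorch sketch of the idea, looping over locations for readability; the normalized-coordinate bookkeeping and the patch size k are our assumptions, and an efficient implementation would batch the sampling:

```python
import torch
import torch.nn.functional as F

def conv_spatial_transformer(feat, theta_map, k=3):
    # feat: (1, C, H, W) features; theta_map: (1, 6, H, W) per-location affine
    # parameters predicted convolutionally (identity = [1, 0, 0, 0, 1, 0])
    _, C, H, W = feat.shape
    out = feat.new_zeros(1, C, H * k, W * k)
    for i in range(H):
        for j in range(W):
            theta = theta_map[0, :, i, j].reshape(1, 2, 3)
            grid = F.affine_grid(theta, (1, C, k, k), align_corners=False)
            # shrink the grid to a k-pixel neighborhood centered on (i, j)
            grid = grid * feat.new_tensor([k / W, k / H])
            grid = grid + feat.new_tensor([2 * j / (W - 1) - 1,
                                           2 * i / (H - 1) - 1])
            patch = F.grid_sample(feat, grid, align_corners=False)
            # place transformed patches into a larger map without overlap...
            out[:, :, i * k:(i + 1) * k, j * k:(j + 1) * k] = patch
    # ...then fuse each patch with a conv of kernel size k and stride k,
    # e.g. nn.Conv2d(C, C, kernel_size=k, stride=k)(out)
    return out
```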
4
Experiments
We use Caffe [14] package for implementation. Since it does not support the new layers we propose,
we implement the correspondence contrastive loss layer and the convolutional spatial transformer
layer, the K-NN layer based on [10] and the channel-wise L2 normalization layer. We did not use a flattening layer or a fully connected layer, which keeps the network fully convolutional, generating
features at every fourth pixel. For accurate localization, we then extract features densely using
bilinear interpolation to mitigate quantization error for sparse correspondences. Please refer to the
supplementary materials for the network implementation details and visualization.
For each experiment setup, we train and test three variations of networks. First, the network has
hard negative mining and spatial transformer (Ours-HN-ST). Second, the same network without
spatial transformer (Ours-HN). Third, the same network without spatial transformer and hard negative
mining, providing random negative samples that are at least certain pixels apart from the ground
truth correspondence location instead (Ours-RN). With this configuration of networks, we verify the effectiveness of each component of the Universal Correspondence Network.

method     | SIFT-NN [22] | HOG-NN [8] | SIFT-flow [19] | DaisyFF [31] | DSP [18] | DM best (1/2) [25] | Ours-HN | Ours-HN-ST
MPI-Sintel | 68.4         | 71.2       | 89.0           | 87.3         | 85.3     | 89.2               | 91.5    | 90.7
KITTI      | 48.9         | 53.7       | 67.3           | 79.6         | 58.0     | 85.6               | 86.5    | 83.4

Table 3: Matching performance PCK@10px on KITTI Flow 2015 [24] and MPI-Sintel [6]. Note that DaisyFF, DSP, DM use global optimization whereas we only use the raw correspondences from nearest neighbor matches.

Figure 5: Comparison of PCK performance on the KITTI raw dataset. (a) PCK of the densely extracted features matched by nearest neighbor. (b) PCK of keypoint features matched by nearest neighbor, and of the dense CNN feature nearest neighbor.

Figure 6: Visualization of nearest neighbor (NN) matches on KITTI images. (a) From top to bottom: first and second images, and FAST keypoints and dense keypoints on the first image. (b) NN of SIFT [22] matches on the second image. (c) NN of dense DAISY [28] matches on the second image. (d) NN of our dense UCN (Ours-HN) matches on the second image.
Datasets and Metrics We evaluate our UCN on three different tasks: geometric correspondence,
semantic correspondence and accuracy of correspondences for camera localization. For geometric
correspondence (matching images of same 3D point in different views), we use two optical flow
datasets from KITTI 2015 Flow benchmark and MPI Sintel dataset and split their training set into
a training and a validation set individually. The exact splits are available on the project website.
For semantic correspondences (finding the same functional part from different instances),
we use the PASCAL-Berkeley dataset with keypoint annotations [9, 4] and a subset used by FlowWeb
[36]. We also compare against prior state-of-the-art on the Caltech-UCSD Bird dataset[30]. To test the
accuracy of correspondences for camera motion estimation, we use the raw KITTI driving sequences
which include Velodyne scans, GPS and IMU measurements. Velodyne points are projected in
successive frames to establish correspondences and any points on moving objects are removed.
To measure performance, we use the percentage of correct keypoints (PCK) metric [21, 36, 16] (or
equivalently "accuracy@T" [25]). We extract features densely or on a set of sparse keypoints (for
semantic correspondence) from a query image and find the nearest neighboring feature in the second
image as the predicted correspondence. The correspondence is classified as correct if the predicted
keypoint is closer than T pixels to ground-truth (in short, PCK@T ). Unlike many prior works, we
do not apply any post-processing, such as global optimization with an MRF. This is to capture the
performance of raw correspondences from UCN, which already surpasses previous methods.
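For reference, the metric itself is trivial to compute (a sketch; the array shapes are assumptions):

```python
import numpy as np

def pck(pred, gt, threshold):
    # pred, gt: (n, 2) predicted and ground-truth keypoint locations
    # threshold: T pixels for PCK@T, or alpha * L for the semantic experiments
    dist = np.linalg.norm(pred - gt, axis=1)
    return float(np.mean(dist <= threshold))
```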
              aero bike bird boat bottle bus  car  cat  chair cow  table dog  horse mbike person plant sheep sofa train tv   mean
conv4 flow    28.2 34.1 20.4 17.1 50.6   36.7 20.9 19.6 15.7  25.4 12.7  18.7 25.9  23.1  21.4   40.2  21.1  14.5 18.3  33.3 24.9
SIFT flow     27.6 30.8 19.9 17.5 49.4   36.4 20.7 16.0 16.1  25.0 16.1  16.3 27.7  28.3  20.2   36.4  20.5  17.2 19.9  32.9 24.7
NN transfer   18.3 24.8 14.5 15.4 48.1   27.6 16.0 11.1 12.0  16.8 15.7  12.7 20.2  18.5  18.7   33.4  14.0  15.5 14.6  30.0 19.9
Ours RN       31.5 19.6 30.1 23.0 53.5   36.7 34.0 33.7 22.2  28.1 12.8  33.9 29.9  23.4  38.4   39.8  38.6  17.6 28.4  60.2 36.0
Ours HN       36.0 26.5 31.9 31.3 56.4   38.2 36.2 34.0 25.5  31.7 18.1  35.7 32.1  24.8  41.4   46.0  45.3  15.4 28.2  65.3 38.6
Ours HN-ST    37.7 30.1 42.0 31.7 62.6   35.4 38.0 41.7 27.5  34.0 17.3  41.9 38.0  24.4  47.1   52.5  47.5  18.5 40.2  70.5 44.0

Table 4: Per-class PCK on the PASCAL-Berkeley correspondence dataset [4] (α = 0.1, L = max(w, h)).

Figure 7: Qualitative semantic correspondence results on PASCAL [9] correspondences with Berkeley keypoint annotation [4] and the Caltech-UCSD Bird dataset [30]. Column labels: query, ground truth, Ours HN-ST, VGG conv4_3 NN.

Geometric Correspondence  We pick 1000 random correspondences in each KITTI or MPI-Sintel image during training. We consider a correspondence as a hard negative if the nearest neighbor in
the feature space is more than 16 pixels away from the ground truth correspondence. We used the
same architecture and training scheme for both datasets. Following convention [25], we measure
PCK at 10 pixel threshold and compare with the state-of-the-art methods on Table 3. SIFT-flow [19],
DaisyFF [31], DSP [18], and DM best [25] use additional global optimization to generate more
accurate correspondences. On the other hand, just our raw correspondences outperform all the
state-of-the-art methods. We note that the spatial transformer does not improve performance in this
case, likely due to overfitting to a smaller training set. As we show in the next experiments, its
benefits are more apparent with a larger-scale dataset and greater shape variations. Note that though
we used stereo datasets to generate a large number of correspondences, the result is not directly
comparable to stereo methods without a global optimization and epipolar geometry to filter out the
noise and incorporate edges.
We also used KITTI raw sequences to generate a large number of correspondences, and we split
different sequences into train and test sets. The details of the split is on the supplementary material.
We plot PCK for different thresholds for various methods with densely extracted features on the larger
KITTI raw dataset in Figure 5a. The accuracy of our features outperforms all traditional features
including SIFT [22], DAISY [28] and KAZE [2]. Due to dense extraction at the original image scale
without rotation, SIFT does not perform well. So, we also extract all features except ours sparsely on
SIFT keypoints and plot PCK curves in Figure 5b. All the prior methods improve (SIFT dramatically
so), but our UCN features still perform significantly better even with dense extraction. Also note
the improved performance of the convolutional spatial transformer. PCK curves for geometric
correspondences on individual semantic classes such as road or car are in supplementary material.
Semantic Correspondence The UCN can also learn semantic correspondences invariant to intraclass appearance or shape variations. We independently train on the PASCAL dataset [9] with various
annotations [4, 36] and on the CUB dataset [30], with the same network architecture.
We again use PCK as the metric [32]. To account for variable image size, we consider a predicted
keypoint to be correctly matched if it lies within Euclidean distance α · L of the ground truth keypoint, where L is the size of the image and 0 < α < 1 is a variable we control. For comparison, our
definition of L varies depending on the baseline. Since intraclass correspondence alignment is a
difficult task, preceding works use either geometric [18] or learned [16] spatial priors. However, even
our raw correspondences, without spatial priors, achieve stronger results than previous works.
As shown in Table 4 and 5, our approach outperforms that of Long et al.[21] by a large margin on the
PASCAL dataset with Berkeley keypoint annotation, for most classes and also overall.

mean PCK        | α = 0.1 | α = 0.05 | α = 0.025
conv4 flow [21] | 24.9    | 11.8     | 4.08
SIFT flow       | 24.7    | 10.9     | 3.55
fc7 NN          | 19.9    | 7.8      | 2.35
ours-RN         | 36.0    | 21.0     | 11.5
ours-HN         | 38.6    | 23.2     | 13.1
ours-HN-ST      | 44.0    | 25.9     | 14.4

Table 5: Mean PCK on the PASCAL-Berkeley correspondence dataset [4] (L = max(w, h)). Even without any global optimization, our nearest neighbor search outperforms all methods by a large margin.

Figure 8: PCK on the CUB dataset [30], compared with various other approaches including WarpNet [16] (L = √(w² + h²)).

Features          | SIFT [22] | DAISY [28] | SURF [3] | KAZE [2] | Agrawal et al. [1] | Ours-HN | Ours-HN-ST
Ang. Dev. (deg)   | 0.307     | 0.309      | 0.344    | 0.312    | 0.394              | 0.317   | 0.325
Trans. Dev. (deg) | 4.749     | 4.516      | 5.790    | 4.584    | 9.293              | 4.147   | 4.728

Table 6: Essential matrix decomposition performance using various features. The performance is measured as angular deviation from the ground truth rotation and the angle between predicted translation and the ground truth translation. All features generate very accurate estimation.

Note that our
result is purely from nearest neighbor matching, while [21] uses global optimization too. We also
train and test UCN on the CUB dataset [30], using the same cleaned test subset as WarpNet [16]. As
shown in Figure 8, we outperform WarpNet by a large margin. However, please note that WarpNet is
an unsupervised method. Please see Figure 7 for qualitative matches. Results on FlowWeb datasets
are in supplementary material, with similar trends.
Finally, we observe that there is a significant performance improvement obtained through use of
the convolutional spatial transformer, in both PASCAL and CUB datasets. This shows the utility of
estimating an optimal patch normalization in the presence of large shape deformations.
Camera Motion Estimation We use KITTI raw sequences to get more training examples for this
task. To augment the data, we randomly crop and mirror the images and to make effective use of our
fully convolutional structure, we use large images to train thousands of correspondences at once.
We establish correspondences with nearest neighbor matching, use RANSAC to estimate the essential
matrix and decompose it to obtain the camera motion. Among the four candidate rotations, we choose
the one with the most inliers as the estimate R_pred, whose angular deviation with respect to the ground truth R_gt is reported as θ = arccos((Tr(R_pred^⊤ R_gt) − 1)/2). Since translation may only be
estimated up to scale, we report the angular deviation between unit vectors along the estimated and
ground truth translation from GPS-IMU.
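A sketch of this evaluation using OpenCV (our illustration; cv2.recoverPose performs the cheirality check that selects the candidate rotation with the most inliers, and the RANSAC threshold is an assumption):

```python
import cv2
import numpy as np

def motion_errors(pts1, pts2, K, R_gt, t_gt):
    # pts1, pts2: (n, 2) matched pixel coordinates; K: 3x3 camera intrinsics
    E, mask = cv2.findEssentialMat(pts1, pts2, K,
                                   method=cv2.RANSAC, threshold=1.0)
    _, R, t, _ = cv2.recoverPose(E, pts1, pts2, K, mask=mask)
    # angular deviation of the rotation, as defined above
    ang = np.degrees(np.arccos(np.clip((np.trace(R.T @ R_gt) - 1) / 2, -1, 1)))
    # angle between unit translation vectors (translation is up to scale)
    t = t.ravel() / np.linalg.norm(t)
    t_gt = np.asarray(t_gt, dtype=float) / np.linalg.norm(t_gt)
    trans = np.degrees(np.arccos(np.clip(t @ t_gt, -1, 1)))
    return ang, trans
```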
In Table 6, we list decomposition errors for various features. Note that sparse features such as SIFT are
designed to perform well in this setting, but our dense UCN features are still quite competitive. Note
that intermediate features such as [1] learn to optimize patch similarity, thus, our UCN significantly
outperforms them since it is trained directly on the correspondence task.
5 Conclusion
We have proposed a novel deep metric learning approach to visual correspondence that is shown to be
advantageous over approaches that optimize a surrogate patch similarity objective. We propose several
innovations, such as a correspondence contrastive loss in a fully convolutional architecture, on-the-fly
active hard negative mining and a convolutional spatial transformer. These lend capabilities such as
more efficient training, accurate gradient computations, faster testing and local patch normalization,
which lead to improved speed or accuracy. We demonstrate in experiments that our features perform
better than prior state-of-the-art on both geometric and semantic correspondence tasks, even without
using any spatial priors or global optimization. In future work, we will explore applications for rigid
and non-rigid motion or shape estimation as well as applying global optimization towards applications
such as optical flow or dense stereo.
Acknowledgments
This work was part of C. Choy's internship at NEC Labs. We acknowledge the support of Korea
Foundation of Advanced Studies, Toyota Award #122282, ONR N00014-13-1-0761, and MURI
WF911NF-15-1-0479.
References
[1] P. Agrawal, J. Carreira, and J. Malik. Learning to See by Moving. In ICCV, 2015.
[2] P. F. Alcantarilla, A. Bartoli, and A. J. Davison. KAZE features. In ECCV, 2012.
[3] H. Bay, A. Ess, T. Tuytelaars, and L. Van Gool. Speeded-up robust features (SURF). CVIU, 2008.
[4] L. Bourdev and J. Malik. Poselets: Body part detectors trained using 3D pose annotations. In ICCV, 2009.
[5] J. Bromley, I. Guyon, Y. LeCun, E. Säckinger, and R. Shah. Signature verification using a Siamese time delay neural network. In NIPS, 1994.
[6] D. J. Butler, J. Wulff, G. B. Stanley, and M. J. Black. A naturalistic open source movie for optical flow evaluation. In ECCV, 2012.
[7] S. Chopra, R. Hadsell, and Y. LeCun. Learning a similarity metric discriminatively, with application to face verification. In CVPR, volume 1, June 2005.
[8] N. Dalal and B. Triggs. Histograms of oriented gradients for human detection. In CVPR, 2005.
[9] M. Everingham, L. Van Gool, C. K. I. Williams, J. Winn, and A. Zisserman. The PASCAL Visual Object Classes Challenge 2011 (VOC2011) Results.
[10] V. Garcia, E. Debreuve, F. Nielsen, and M. Barlaud. K-nearest neighbor search: Fast GPU-based implementations and application to high-dimensional feature matching. In ICIP, 2010.
[11] R. Girshick. Fast R-CNN. ArXiv e-prints, Apr. 2015.
[12] R. Hadsell, S. Chopra, and Y. LeCun. Dimensionality reduction by learning an invariant mapping. In CVPR, 2006.
[13] M. Jaderberg, K. Simonyan, A. Zisserman, and K. Kavukcuoglu. Spatial Transformer Networks. NIPS, 2015.
[14] Y. Jia, E. Shelhamer, J. Donahue, S. Karayev, J. Long, R. Girshick, S. Guadarrama, and T. Darrell. Caffe: Convolutional architecture for fast feature embedding. arXiv preprint arXiv:1408.5093, 2014.
[15] H. Kaiming, Z. Xiangyu, R. Shaoqing, and J. Sun. Spatial pyramid pooling in deep convolutional networks for visual recognition. In ECCV, 2014.
[16] A. Kanazawa, D. W. Jacobs, and M. Chandraker. WarpNet: Weakly Supervised Matching for Single-view Reconstruction. ArXiv e-prints, Apr. 2016.
[17] A. Kanazawa, A. Sharma, and D. Jacobs. Locally Scale-invariant Convolutional Neural Network. In Deep Learning and Representation Learning Workshop: NIPS, 2014.
[18] J. Kim, C. Liu, F. Sha, and K. Grauman. Deformable spatial pyramid matching for fast dense correspondences. In CVPR. IEEE, 2013.
[19] C. Liu, J. Yuen, and A. Torralba. SIFT flow: Dense correspondence across scenes and its applications. PAMI, 33(5), May 2011.
[20] J. Long, E. Shelhamer, and T. Darrell. Fully convolutional networks for semantic segmentation. CVPR, 2015.
[21] J. Long, N. Zhang, and T. Darrell. Do convnets learn correspondence? In NIPS, 2014.
[22] D. G. Lowe. Distinctive image features from scale-invariant keypoints. IJCV, 2004.
[23] J. Matas, O. Chum, M. Urban, and T. Pajdla. Robust wide baseline stereo from maximally stable extremal regions. In BMVC, 2002.
[24] M. Menze and A. Geiger. Object scene flow for autonomous vehicles. In CVPR, 2015.
[25] J. Revaud, P. Weinzaepfel, Z. Harchaoui, and C. Schmid. DeepMatching: Hierarchical Deformable Dense Matching. Oct. 2015.
[26] F. Schroff, D. Kalenichenko, and J. Philbin. FaceNet: A unified embedding for face recognition and clustering. In CVPR, 2015.
[27] H. O. Song, Y. Xiang, S. Jegelka, and S. Savarese. Deep metric learning via lifted structured feature embedding. In CVPR, 2016.
[28] E. Tola, V. Lepetit, and P. Fua. DAISY: An efficient dense descriptor applied to wide baseline stereo. PAMI, 2010.
[29] J. Wang, Y. Song, T. Leung, C. Rosenberg, J. Wang, J. Philbin, B. Chen, and Y. Wu. Learning fine-grained image similarity with deep ranking. In CVPR, 2014.
[30] P. Welinder, S. Branson, T. Mita, C. Wah, F. Schroff, S. Belongie, and P. Perona. Caltech-UCSD Birds 200. Technical Report CNS-TR-2010-001, California Institute of Technology, 2010.
[31] H. Yang, W. Y. Lin, and J. Lu. DAISY filter flow: A generalized approach to discrete dense correspondences. In CVPR, 2014.
[32] Y. Yang and D. Ramanan. Articulated human detection with flexible mixtures of parts. PAMI, 2013.
[33] K. M. Yi, E. Trulls, V. Lepetit, and P. Fua. LIFT: Learned Invariant Feature Transform. In ECCV, 2016.
[34] S. Zagoruyko and N. Komodakis. Learning to compare image patches via convolutional neural networks. CVPR, 2015.
[35] J. Zbontar and Y. LeCun. Computing the stereo matching cost with a CNN. In CVPR, 2015.
[36] T. Zhou, Y. Jae Lee, S. X. Yu, and A. A. Efros. FlowWeb: Joint image set alignment by weaving consistent, pixel-wise correspondences. In CVPR, June 2015.
Protein contact prediction from amino acid
co-evolution using convolutional networks for
graph-valued images
Vladimir Golkov¹, Marcin J. Skwark², Antonij Golkov³, Alexey Dosovitskiy⁴, Thomas Brox⁴, Jens Meiler², and Daniel Cremers¹
¹ Technical University of Munich, Germany
² Vanderbilt University, Nashville, TN, USA
³ University of Augsburg, Germany
⁴ University of Freiburg, Germany
golkov@cs.tum.edu, marcin@skwark.pl, antonij.golkov@student.uni-augsburg.de,
{dosovits,brox}@cs.uni-freiburg.de, jens.meiler@vanderbilt.edu, cremers@tum.de
Abstract
Proteins are responsible for most of the functions in life, and thus are the central
focus of many areas of biomedicine. Protein structure is strongly related to protein function, but is difficult to elucidate experimentally; computational structure prediction is therefore a crucial step toward solving many biological questions.
A contact map is a compact representation of the three-dimensional structure of a
protein via the pairwise contacts between the amino acids constituting the protein.
We use a convolutional network to calculate protein contact maps from detailed
evolutionary coupling statistics between positions in the protein sequence. The
input to the network has an image-like structure amenable to convolutions, but every "pixel", instead of color channels, contains a bipartite undirected edge-weighted graph. We propose several methods for treating such "graph-valued images" in a
convolutional network. The proposed method outperforms state-of-the-art methods
by a considerable margin.
1 Introduction
Proteins perform most of the functions in the cells of living organisms, acting as enzymes to perform
complex chemical reactions, recognizing foreign particles, conducting signals, and building cell
scaffolds, to name just a few. Their function is dictated by their three-dimensional structure, which
can be quite involved, despite the fact that proteins are linear polymers composed of only 20 different
types of amino acids. The sequence of amino acids dictates the three-dimensional structure and
related proteins share both structure and function. Predicting protein structure from amino acid
sequence remains a problem that is still largely unsolved.
1.1 Protein structure and contact maps
The primary structure of a protein refers to the linear sequence of the amino acid residues that
constitute the protein, as encoded by the corresponding gene. During or after its biosynthesis, a
protein spatially folds into an energetically favourable conformation. Locally it folds into so-called
secondary structure (α-helices and β-strands). The global three-dimensional structure into which the
entire protein folds is referred to as the tertiary structure. Fig. 1a depicts the tertiary structure of a
protein consisting of several α-helices.
30th Conference on Neural Information Processing Systems (NIPS 2016), Barcelona, Spain.
Protein structure is mediated and stabilized by a series of weak interactions (physical contacts) between
pairs of its amino acids. Let L be the length of the sequence of a protein (i.e. the number of its amino
acids). The tertiary structure can be partially summarized as a so-called contact map: a sparse L × L
matrix C encoding the presence or absence of physical contact between all pairs of L amino acid
residues of a protein. The entry C_{i,j} is equal to 1 if residues i and j are in contact and 0 if they are
not. Intermediate values may encode different levels of contact likeliness.
We use these intermediate values without rounding where possible because they hold additional
information. The "contact likeliness" is a knowledge-based function derived from the Protein Data Bank, dependent on the distance between Cβ atoms of the involved amino acids and their type. It has been parametrized based on the amino acids' heavy atoms making biophysically feasible contact in
experimentally determined structures.
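As a simplified stand-in for this knowledge-based function (which we do not reproduce here), a common convention defines contact by a fixed distance cutoff; the following numpy sketch uses that simplification:

```python
import numpy as np

def contact_map(cb_coords, threshold=8.0):
    # cb_coords: (L, 3) C-beta coordinates of the L residues. The 8 Angstrom
    # cutoff is a widespread simplification, not the paper's residue-type-
    # dependent "contact likeliness" function.
    d = np.linalg.norm(cb_coords[:, None, :] - cb_coords[None, :, :], axis=-1)
    return (d < threshold).astype(np.float32)
```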
Figure 1: (a) Tertiary structure of oxymyoglobin and its contact between amino acid residues 6 and 133. (b) Contact map; helix–helix contacts correspond to "checkerboard" patterns. (c) Variants of the contact 6/133 encountered in nature (native pose in upper left; the remaining poses are theoretical models), which (d) are reflected in the co-evolution statistics.
2 Methods
The proposed method is based on inferring direct co-evolutionary couplings between pairs of amino
acids of a protein, and predicting the contact map from them using a convolutional neural network.
2.1 Multiple sequence alignments
As of today the UniProt Archive (UniParc [1]) consists of approximately 130 million different protein
sequences. This is only a small fraction of all the protein sequences existing on Earth, whose number
is estimated to be on the order of 10^10 to 10^12 [2]. Despite this abundance, there exist only about 10^5 sequence families, which in turn adopt one of about 10^4 folds [2]. This is due to the fact that
homologous proteins (proteins originating from common ancestors) are similar in terms of their
structure and function. Homologs are under evolutionary pressure to maintain the structure and
function of the ancestral protein, while at the same time adapting to the changes in the environment.
Evolutionarily related proteins can be identified by means of homology search using dynamic
programming, hidden Markov models, and other statistical models, which group homologous proteins
into so-called multiple sequence alignments. A multiple sequence alignment consists of sequences
of related proteins, aligned such that corresponding amino acids share the same position (column).
The 20 amino acid types are represented by the letters A,C,D,E,F,G,H,I,K,L,M,N,P,Q,R,S,T,V,W,Y.
In addition, a "gap" (represented as "-") is used as a 21st character to account for insertions and deletions.
For the purpose of this work, all the input alignments have been generated with jackhmmer, part
of the HMMER package (version 3.1b2, http://hmmer.org), run against the UniParc database released
in summer 2015. The alignment has been constructed with the E-value inclusion threshold of 1,
allowing for inclusion of distant homologs, at a risk of contaminating the alignment with potentially
evolutionarily unrelated sequences. The resultant multiple sequence alignments have not been
modified in any way, except for removal of inserts (positions that were not present in the protein
sequence of interest). Notably, contrary to many evolutionary approaches, we did not remove columns
that (a) contained many gaps, (b) were too diverse or (c) were too conserved. In so doing, we emulated
a fully automated prediction regime.
2.2 Potts model for co-evolution of amino acid residues
Protein structure is stabilized by a series of contacts: weak, favourable interactions between amino acids
adjacent in space (but not necessarily in sequence). If an amino acid becomes mutated in the course
of evolution, breaking a favourable contact, there is an evolutionary pressure for a compensating
mutation to occur in the interacting partner(s) to restore the protein to an unfrustrated state. These
pressures lead to amino acid pairs varying in tandem in the multiple sequence alignments. The
observed covariances can subsequently be used to predict which of the positions in the protein
sequence are close together in space.
The directly observed covariances are by themselves a poor predictor of inter-residue contact. This
is due to transitivity of correlations in multiple sequence alignments. When an amino acid A that
is in contact with amino acids B and C mutates to A′, it exerts a pressure for B and C to adapt to
this mutation, leading to a spurious, indirect correlation between B and C. Oftentimes these spurious
correlations are more prominent than the actual, direct ones. This problem can be modelled in terms
of one- and two-body interactions, analogous to the Ising model of statistical mechanics (or its
generalization ? the Potts model). Solving an inverse Ising/Potts problem (inferring direct causes
from a set of observations), while not feasible analytically, can be accomplished by approximate,
numerical algorithms. Such approaches have been recently successfully applied to the problem of
protein contact prediction [3, 4].
One of the most widely-adopted approaches to this problem is pseudolikelihood maximization for
inferring an inverse Potts model (plmDCA [3, 5]). It results in an L × L × 21 × 21 array of inferred evolutionary couplings between pairs of the L positions in the protein, described in terms of 21 × 21 coupling matrices. These coupling matrices depict the strength of evolutionary pressure at particular amino acid type pairs (e.g. histidine–threonine) to be present at this position pair: the higher the
value, the more pressure there is. These values are not directly interpretable, as they depend on the
environment the amino acids are in, their propensity to mutate and many other factors. So far, the best
approach to obtain scores corresponding to contact propensities was to compute the Frobenius norm
of individual coupling matrices rendering a contact matrix, which then has been subject to average
product correction [6]. Average product correction scales the value of contact propensity based on
the mean values for involved positions and a mean value for the entire contact matrix.
As there is insufficient data to conclusively infer all the parameters, and coupling inference is
inherently ill-posed, regularization is required [3, 5]. Here we used ℓ2 regularization with strength λ = 0.01.
These approaches, which reduce each 21 × 21 coupling matrix to only one value, discard valuable information encoded in the matrices, consequently leading to a reduction in expected predictive capability.
In this work we use the entire L × L × 21 × 21 coupling data J in its unmodified form. The value J_{i,j,k,l} quantifies the co-evolution of residue type k at location i with residue type l at location j. The L × L × 21 × 21 array J serves as the main input to the convolutional network to predict the L × L contact map C.
The following symmetries hold: C_{i,j} = C_{j,i} and J_{i,j,k,l} = J_{j,i,l,k} for all i, j, k, l.
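For comparison, the Frobenius-norm baseline with average product correction [6] that the network replaces can be written in a few lines (a sketch; the symmetrization step is our assumption):

```python
import numpy as np

def apc_contact_scores(J):
    # J: (L, L, 21, 21) inferred couplings
    S = np.linalg.norm(J, axis=(2, 3))    # Frobenius norm of each 21x21 block
    S = 0.5 * (S + S.T)                   # enforce S_ij = S_ji
    # average product correction: subtract (row mean * column mean) / mean
    apc = np.outer(S.mean(axis=1), S.mean(axis=0)) / S.mean()
    return S - apc
```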
2.3 Convolutional neural network for contact prediction
The goal of this work is to predict the contact C_{i,j} between residues i and j from the co-evolution statistics J_{i,j,k,l} obtained from pseudolikelihood maximization [3]. Not only the local statistics (J_{i,j,k,l})_{k,l} for fixed (i, j) but also the neighborhood around (i, j) is informative for contact determination. Particularly, contacts between different secondary structure elements are reflected both in the spatial contact pattern, such as the "checkerboard" pattern typical for helix–helix contacts, cf. Fig. 1b (the "i" and "j" dimensions), as well as in the residue types (the "k" and "l" dimensions) at (i, j) and in its neighborhood. Thus, a convolutional neural network [7] with convolutions over (i, j), i.e. learning the transformation to be applied to all w × w × 21 × 21 windows of (J_{i,j,k,l}), is a highly appropriate method for prediction of C_{i,j}.
The features in each "pixel" (i, j) are the entries of the 21 × 21 co-evolution statistics (J_{i,j,k,l})_{k,l ∈ {1,...,21}} between amino acid residues i and j. Fig. 1d shows the co-evolution statistics of residues 6 and 133, i.e. (J_{6,133,k,l})_{k,l ∈ {1,...,21}}, of oxymyoglobin. These 21 × 21 entries can be vectorized to constitute the feature vector of length 441 at the respective "pixel".
The neural network input J and its output C should have the same size along the convolution dimensions i and j. In order to achieve this, the input boundaries are padded accordingly (i.e.
by the receptive window size) along these dimensions. In order to help the network distinguish the
padding values (e.g. zeros) from valid co-evolution values, the indicator function of the valid region
(1 in the valid L × L region and 0 in the padded region) is introduced as an additional feature channel.
Our method is based on pseudolikelihood maximization [3] and convolutional networks, plmConv for
short.
2.4 Convolutional neural network for bipartite-graph-valued images
The fixed order of the 441 features can be considered acceptable since any input-output mapping
can in principle be learned, assuming we have sufficient training data (and an appropriate network
architecture). However, if the amount of training data is limited then a better-structured, more
compact representation might be of great advantage as opposed to requiring to see most of the
possible configurations of co-evolution. Such more compact representations can be obtained by
relaxing the knowledge of the identities of the amino acid residues, as described in the following.
The features at "pixel" (i, j) correspond to the weights of a (complete) bipartite undirected edge-weighted graph K_{21,21} with 21 + 21 vertices, with the first disjoint set of 21 vertices representing the 21 amino acid types at position i, the second set representing the 21 amino acid types at position j, and the edge weights representing co-evolution of the respective variants. Thus, B = (J_{i,j,k,l})_{k,l ∈ {1,...,21}} is the biadjacency matrix of this graph, i.e.

$$A = \begin{pmatrix} 0 & B \\ B^\top & 0 \end{pmatrix}$$

is its adjacency matrix. The edge weights (i.e. the entries of B) are different at each "pixel" (i, j).
There are different possibilities of passing these features (the entries of B) to a convolutional network.
We propose and evaluate the following possibilities to construct the feature vector at pixel (i, j):
1. Vectorize B, maintaining the order of the amino acid types;
2. Sort the vectorized matrix B;
3. Sort the rows of B by their row-wise norm, then vectorize;
4. Construct a histogram of the entries of B.
While the first method maintains the order of amino acid types, all others produce feature vectors that
are invariant to permutations of the amino acid types.
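A minimal numpy sketch of the four featurizations (the function and mode names are ours):

```python
import numpy as np

def pixel_features(B, mode="raw", bins=16):
    # B: 21x21 biadjacency matrix of one "pixel" (i, j)
    if mode == "raw":          # 1. keep the amino acid order
        return B.ravel()
    if mode == "sorted":       # 2. sort the vectorized matrix
        return np.sort(B, axis=None)
    if mode == "row_sorted":   # 3. sort rows by row-wise norm, then vectorize
        return B[np.argsort(np.linalg.norm(B, axis=1))].ravel()
    if mode == "histogram":    # 4. histogram of the entries
        return np.histogram(B, bins=bins)[0]
    raise ValueError(mode)
```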
2.5 Generalization to arbitrary graphs
In other applications to graph-valued images with general (not necessarily bipartite) graphs, similar
transformations as above can be applied to the adjacency matrix A. An additional useful property
is the special role of the diagonal of A. Node weights can be included as additional features, and
accordingly reordered.
There has been work on neural networks which can process functions defined on graphs [8, 9, 10, 11].
In contrast to these approaches, in our case the input is defined on a regular grid, but the value of the
input at each location is a graph.
2.6 Data sets
The Critical Assessment of Techniques for Protein Structure Prediction (CASP) is a bi-annual
community-wide experiment in blind prediction of previously unknown protein structures. The
prediction targets vary in difficulty, with some having a structure of homologous proteins already
deposited in the Protein Data Bank (PDB), considered easy targets, some having no detectable
homologs in PDB (hard targets), and some having entirely new folds (free modelling targets). The
protein targets vary also in terms of available sequence homologs, which can range from only a few
sequences to hundreds of thousands.
We posit that the method we propose is robust and general. To illustrate its performance, we have
intentionally trained it on a limited set of proteins originating from CASP9 and CASP10 experiments
and tested it on CASP11 proteins. In so doing, we emulated the conditions of a real-life structure
prediction experiment.
The proteins from these experiments form a suitable data set for this analysis, as they (a) are varied
in terms of structure and ?difficulty?, (b) have previously unknown structures, which have been
subsequently made public, (c) are timestamped and (d) they have been subject to contact prediction
attempts by other groups whose results are publicly available. Therefore, training on CASP9 and
CASP10 data sets allowed us to avoid cross-contamination. We are reasonably confident that any
performance of the method originates from the method?s strengths and is not a result of overfitting.
The training has been conducted on a subset of 231 proteins from CASP9 and CASP10, while the
test set consisted of 89 proteins from CASP11 (all non-cancelled targets). Several proteins have been
excluded from the training set for technical reasons: lack of any detectable homologs, too many
homologs detected, or lack of structure known at the time of publishing of CASP sets. The problems
with the number of sequences can be alleviated by attempting different homology detection strategies,
which we did not do, as we wanted to keep the analysis homogeneous.
2.7 Neural network architecture
Deep learning has strong advantages over handcrafted processing pipelines; it is setting new performance records and bringing new insights to the biomedical community [12, 13]. However, parts of the community are adopting deep learning with some hesitation, even in areas where it is essential for scientific progress. One of the main objections is the belief that the craft of network architecture design and the network internals cannot be scientifically comprehended and lack theoretical underpinnings. This belief is false: there are scientific results to the contrary, concerning both the loss function [14] and network internals [15].
In the present work, we design the network architecture based on our knowledge of which features might be meaningful for the network to extract, and how.
The first layer learns 128 filters of size 1 × 1. Thus, the 441 input features are compressed to 128 learned features. This compression enforces the grouping of similar amino acids by their properties; examples of important properties are hydrophobicity, polarity, and size. Much of the relevant content of an input statement such as “cysteine (C) at position i has a strongly positive evolutionary coupling with histidine (H) at position j” (cf. Fig. 1d) is that the co-evolving amino acids have certain hydrophilicity properties; that both are polar; and that the one at position i is rather small while the one at position j is rather large; etc. One layer is sufficient to perform such a transformation. Note that we do not handcraft these features; the network learns feature extractors that are optimal in terms of the training data. Besides, compressing the inputs in this optimal way also reduces the number of weights of the subsequent layer, thus regularizing the model in a natural way and reducing the run time and memory requirements.
The second layer learns 64 filters of size 7 × 7. This allows the network to see the context (and end) of the contact between two secondary structure elements (e.g., a contact between two β-strands). In other words, this choice of window size and number of filters is motivated by the fact that information such as “(i, j) is a contact between a β-strand at i and a β-strand at j, the arrangement is antiparallel, the contact ends two residues after i (and before j)” can be captured from a 7 × 7 window of the data, and well encoded in about 64 filters.
The third and final layer learns one filter (returning the predicted contact map) with window size 9 × 9. Thus, the overall receptive window of the convolutional network is 15 × 15, which provides the required amount of context of the co-evolution data to predict the contacts. In particular, the relative position (including the angle) between two contacting α-helices can be well captured at this window size. At the same time, this deep architecture is different from having, say, a network with a single 15 × 15 convolutional layer, because a non-deep network would require seeing many possible 15 × 15 configurations in a non-abstract manner, and would tend to generalize badly and overfit. In contrast, abstraction to higher-level features is provided by the preceding layers in our architecture.
We used mean squared error loss; dropout of 0.2 after the input layer and 0.5 after each hidden layer; one-pixel stride; and no pooling. The network is trained in Lasagne (https://github.com/Lasagne) using the Adam algorithm [16] with learning rate 0.0001 for 100 epochs.
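For concreteness, the stated architecture can be sketched as follows. The authors trained in Lasagne; this PyTorch rendering mirrors only the stated choices (filter counts and sizes, dropout rates, MSE loss, Adam with learning rate 0.0001), while the ReLU nonlinearities, the sigmoid output, the padding, and the 442-channel input (441 features plus the indicator channel) are our assumptions:

```python
import torch
import torch.nn as nn

class PlmConvNet(nn.Module):
    """Sketch of the described three-layer architecture (not the authors'
    Lasagne code). Receptive field: 1 + (7-1) + (9-1) = 15 pixels."""
    def __init__(self, in_channels=442):     # 441 features + 1 indicator (assumed)
        super().__init__()
        self.net = nn.Sequential(
            nn.Dropout2d(0.2),                              # dropout after input
            nn.Conv2d(in_channels, 128, kernel_size=1),     # group amino acid properties
            nn.ReLU(),                                      # nonlinearity: our assumption
            nn.Dropout2d(0.5),
            nn.Conv2d(128, 64, kernel_size=7, padding=3),   # secondary-structure context
            nn.ReLU(),
            nn.Dropout2d(0.5),
            nn.Conv2d(64, 1, kernel_size=9, padding=4),     # predicted contact map
            nn.Sigmoid(),                                   # output range: our assumption
        )

    def forward(self, x):                    # x: (N, 442, L, L)
        return self.net(x)

model = PlmConvNet()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.MSELoss()                       # mean squared error, as in the paper
```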
3 Results
To assess the performance of protein contact prediction methods, we have used the contact likeliness criterion for Cβ distances (cf. Introduction); the qualitative results do not depend on the criterion chosen. We have evaluated predictions in terms of the Top 10 pairs that are predicted most likely to be in contact. Since it is estimated that one can observe L to 3L contacts in a protein, where L is the length of the amino acid chain, we have also evaluated greater numbers of predicted contacts. We have assessed the predictions with respect to sequence separation: it is widely accepted that long-range contacts are more difficult to predict than contacts separated by few amino acids in sequence space, yet it is the long-range contacts that are most useful for restraining protein structure prediction simulations [17]. Maintaining the order of amino acid types (feature vector construction method #1) yielded the best results in our case, so we focus on it exclusively in the following.
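The PPV evaluation used throughout this section can be expressed compactly; this is our sketch of the criterion (top-k highest-scoring pairs at a given minimum sequence separation), not the authors' evaluation script:

```python
import numpy as np

def ppv_top_k(scores, true_contacts, k, min_sep=24):
    """Positive predictive value of the top-k predicted contacts with
    sequence separation >= min_sep (24 for long-range). `scores` and
    `true_contacts` are L x L arrays; a sketch of the evaluation."""
    L = scores.shape[0]
    i, j = np.triu_indices(L, k=min_sep)        # pairs with j - i >= min_sep
    order = np.argsort(scores[i, j])[::-1][:k]  # k highest-scoring pairs
    return true_contacts[i[order], j[order]].mean()

# e.g. PPV at L/5 long-range contacts: ppv_top_k(S, C, k=L // 5, min_sep=24)
```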
(a) PPV for our approach vs. plmDCA20 and MetaPSICOV. (b) PPV for discussed methods as a function of contact definition.
Figure 2: Method performance. Panel (a): prediction accuracy of plmConv (Y-axis) vs. plmDCA and MetaPSICOV (X-axis, in red and yellow, respectively); lines: least-squares fit; circles: individual comparisons. Panel (b): prediction accuracy, depending on contact definition. X-axis: Cβ distance threshold for an amino acid pair to be in contact.
plmConv yields more accurate predictions than plmDCA. We compared the predictive performance of the proposed plmConv method to plmDCA in terms of positive predictive value (PPV) at different prediction counts and different sequence separations (see Table 1 and Fig. 2a). Regardless of the chosen threshold, plmConv yields considerably higher accuracy. This effect is particularly important in the context of long-range contacts, which tend to be underpredicted by plmDCA and related methods, but are readily recovered by plmConv. The notable improvement in predictive power is important, given that both plmDCA and plmConv use exactly the same data and the same inference algorithm, and differ only in the processing of the inferred co-evolution matrices. We posit that this may have longstanding implications for evolutionary coupling analysis, some of which we discuss below.
plmConv is more accurate than MetaPSICOV, while remaining more flexible. We compared our method to MetaPSICOV [18, 19], the method that performed best in the CASP11 experiment. We observed that plmConv results in overall higher prediction accuracy than MetaPSICOV (see Table 1 and Fig. 2a). This holds for all the criteria except the top-ranked short-range contacts. MetaPSICOV performs slightly better at the top-ranked short-range contacts, but these are easier to predict and less useful for protein folding [17]. It is worth noting that MetaPSICOV achieves its high prediction accuracy by combining multiple sources of co-evolution data (including methods functionally identical to plmDCA) with predicted biophysical properties of a protein (e.g., secondary structure) and a feed-forward neural network. With plmConv we are able to achieve higher performance using (a) an arbitrary alignment and (b) a single co-evolution result, which potentially allows for tuning the hyperparameters of (a) and (b) to answer relevant biological questions.
Separation  Method      Top 10  L/10   L/5    L/2    L
All         MetaPSICOV  0.797   0.761  0.717  0.615  0.516
All         plmDCA      0.598   0.570  0.525  0.435  0.356
All         plmConv     0.807   0.768  0.729  0.663  0.573
Short       MetaPSICOV  0.754   0.683  0.583  0.415  0.294
Short       plmDCA      0.497   0.415  0.318  0.229  0.178
Short       plmConv     0.724   0.654  0.581  0.438  0.320
Medium      MetaPSICOV  0.710   0.645  0.559  0.419  0.302
Medium      plmDCA      0.506   0.438  0.355  0.253  0.180
Medium      plmConv     0.744   0.673  0.583  0.428  0.304
Long        MetaPSICOV  0.594   0.562  0.522  0.436  0.339
Long        plmDCA      0.536   0.516  0.455  0.372  0.285
Long        plmConv     0.686   0.651  0.616  0.531  0.430
Table 1: Positive predictive value for all non-local (separation 6+ positions), short-range, mid-range and long-range (6–11, 12–23 and 24+ positions) contacts. We demonstrate results for the Top 10 contacts per protein, as well as the customary thresholds of L/10, L/5, L/2 and L contacts per protein, where L is the length of the amino acid chain.
Figure 3: Positive predictive value for described methods at L contacts considered as a function of the
information content of the alignment. Scatter plot: observed raw values. Line plot: rolling average
with window size 15.
plmConv pushes the boundaries of inference with few sequences. One of the major drawbacks of statistical inference for evolutionary analysis is its dependence on the availability of large numbers of homologous sequences in multiple sequence alignments. Our method alleviates this problem to a large extent. As illustrated in Fig. 3, plmConv outperforms plmDCA across the entire range. MetaPSICOV appears to be slightly better at the low-count end of the spectrum, which we believe is due to the way MetaPSICOV augments the prediction process with additional data, a technique known to improve prediction that we have expressly not used in this work.
plmConv predicts long-range contacts more accurately. As mentioned above, it is the long-range
contacts which are of most utility for protein structure prediction experiments. Table 1 demonstrates
that plmConv is highly suitable for predicting long range contacts, yielding better performance across
all the contact count thresholds.
T0784: a success story. One of the targets in CASP11 (target ID: T0784) was a DUF4425 family protein (BACOVA_05332) from Bacteroides ovatus (PDB ID: 4qey). The number of identifiable sequence homologs for this protein was relatively low, which resulted in an uninterpretable contact map obtained by plmDCA. The same co-evolution statistics used as input to plmConv yielded a contact map which not only was devoid of the noise present in plmDCA's contact map, but also uncovered numerous long-range contacts that were not identifiable previously. The contact map produced by plmConv for this target is also of much higher utility than the one returned by MetaPSICOV. Note in Fig. 4c how the MetaPSICOV prediction lacks nearly all the long-range contacts, which are present in the plmConv prediction.
(a) Structure. (b) Contact maps predicted by our method vs. plmDCA. (c) Contact maps predicted by our method vs. MetaPSICOV.
Figure 4: An example of one of the CASP11 proteins (T0784), where plmConv is able to recover the contact map and other methods cannot. True contacts (ground truth) are marked in gray. Predictions of the respective methods are marked in color, with true positives in green and false positives in red. Predictions along the diagonal with a separation of 5 amino acids or less have not been considered in computing the positive predictive value and have been marked in lighter colors in the plots.
4 Discussion and Conclusions
In this work we proposed an entirely new way to handle the outputs of co-evolutionary analyses of multiple sequence alignments of homologous proteins. We demonstrated that this method is considerably superior to the current ways of handling co-evolution data: it extracts more information from them and consequently greatly aids protein contact prediction based on these data. Contact prediction with our method is more accurate and 2 to 3 times faster than with MetaPSICOV.
Relevance to the field. Until now, the utility of co-evolution-based contact prediction was limited because most of the proteins that had a sufficiently high number of sequence homologs also had their structures determined and available for comparative modelling. As plmConv is able to predict high-accuracy contact maps from as few as 100 sequences, it opens a whole new avenue of possibilities for the field. While there are only a few protein families that have thousands of known homologs but no known structure, there are hundreds that are potentially within the scope of this method. We postulate that this method should allow for the computational elucidation of more structures, be it by means of pure computational simulation, or simulation guided by predicted contacts and sparse experimental restraints.
plmConv allows for varying prediction parameters. One of the strengths of the proposed method is that it is agnostic to the input data, in particular to the way input alignments are constructed and to the inference parameters (regularization strength). Therefore, one could envision using alignments of close homologs to elucidate the co-evolution of a variable region in the protein (e.g., variable regions of antibodies, extracellular loops of G protein-coupled receptors, etc.), or distant homologs to yield structural insights into the overall fold of the protein. In the same way, one could vary the regularization strength of the inference, with stronger regularization allowing for more precise elucidation of the few couplings (and consequently contacts) that are most significant for protein stability or structure from the evolutionary point of view. Conversely, it is possible to relax the regularization strength and let the data speak for themselves, which could potentially result in a better picture of the overall contact map and give a holistic insight into the evolutionary constraints on the structure of the protein in question.
The method we propose is directly applicable to a vast array of biological problems, being both accurate and flexible. It can use arbitrary input data and prediction parameters, which allows the end user to tailor it to answer pertinent biological questions. Most importantly, even when trained on a heavily constrained data set, it is able to produce results exceeding the predictive capabilities of the state-of-the-art methods in protein contact prediction at a fraction of the computational effort, making it perfectly suitable for large-scale analyses. We expect that the performance of the method will further improve when trained on a larger, more representative set of proteins.
Acknowledgments. Grant support: Deutsche Telekom Foundation, ERC Consolidator Grant “3DReloaded”, ERC Starting Grant “VideoLearn”.
References
[1] Rasko Leinonen, Federico Garcia Diez, David Binns, Wolfgang Fleischmann, Rodrigo Lopez, and Rolf Apweiler. UniProt archive. Bioinformatics, 20(17):3236–3237, 2004.
[2] In-Geol Choi and Sung-Hou Kim. Evolution of protein structural classes and protein sequence families. Proceedings of the National Academy of Sciences of the United States of America, 103(38):14056–61, 2006.
[3] Magnus Ekeberg, Cecilia Lövkvist, Yueheng Lan, Martin Weigt, and Erik Aurell. Improved contact prediction in proteins: Using pseudolikelihoods to infer Potts models. Physical Review E - Statistical, Nonlinear, and Soft Matter Physics, 87(1):1–19, 2013.
[4] Faruck Morcos, Andrea Pagnani, Bryan Lunt, Arianna Bertolino, Debora S Marks, Chris Sander, Riccardo Zecchina, José N Onuchic, Terence Hwa, and Martin Weigt. Direct-coupling analysis of residue coevolution captures native contacts across many protein families. Proceedings of the National Academy of Sciences of the United States of America, 108(49):E1293–301, 2011.
[5] Christoph Feinauer, Marcin J. Skwark, Andrea Pagnani, and Erik Aurell. Improving contact prediction along three dimensions. PLOS Computational Biology, 10(10):e1003847, 2014.
[6] S. D. Dunn, L. M. Wahl, and G. B. Gloor. Mutual information without the influence of phylogeny or entropy dramatically improves residue contact prediction. Bioinformatics, 24(3):333–340, 2008.
[7] Y. LeCun, B. Boser, J. S. Denker, D. Henderson, R. E. Howard, W. Hubbard, and L. D. Jackel. Backpropagation applied to handwritten ZIP code recognition. Neural Computation, 1(4):541–551, 1989.
[8] Franco Scarselli, Marco Gori, Ah Chung Tsoi, Markus Hagenbuchner, and Gabriele Monfardini. The graph neural network model. IEEE Transactions on Neural Networks, 20(1):61–80, 2009.
[9] Joan Bruna, Wojciech Zaremba, Arthur Szlam, and Yann LeCun. Spectral networks and locally connected networks on graphs. In International Conference on Learning Representations, 2014.
[10] Mikael Henaff, Joan Bruna, and Yann LeCun. Deep convolutional networks on graph-structured data. arXiv:1506.05163, 2015.
[11] David K Duvenaud, Dougal Maclaurin, Jorge Iparraguirre, Rafael Bombarell, Timothy Hirzel, Alan Aspuru-Guzik, and Ryan P Adams. Convolutional networks on graphs for learning molecular fingerprints. Advances in Neural Information Processing Systems 28, pages 2215–2223, 2015.
[12] Olaf Ronneberger, Philipp Fischer, and Thomas Brox. U-net: convolutional networks for medical image segmentation. In Medical Image Computing and Computer-Assisted Intervention, pages 234–241, 2015.
[13] Vladimir Golkov, Alexey Dosovitskiy, Jonathan Sperl, Marion Menzel, Michael Czisch, Philipp Sämann, Thomas Brox, and Daniel Cremers. q-Space deep learning: twelve-fold shorter and model-free diffusion MRI scans. IEEE Transactions on Medical Imaging, 35(5):1344–1351, 2016.
[14] Anna Choromanska, Mikael Henaff, Michael Mathieu, Gérard Ben Arous, and Yann LeCun. The loss surfaces of multilayer networks. Journal of Machine Learning Research: Workshop and Conference Proceedings, 38:192–204, 2015.
[15] Alexey Dosovitskiy and Thomas Brox. Inverting visual representations with convolutional networks. In IEEE Conference on Computer Vision and Pattern Recognition, 2016.
[16] Diederik P. Kingma and Jimmy Lei Ba. Adam: a method for stochastic optimization. In International Conference on Learning Representations, 2015.
[17] M Michael Gromiha and Samuel Selvaraj. Importance of long-range interactions in protein folding. Biophysical Chemistry, 77(1):49–68, 1999.
[18] David T. Jones, Tanya Singh, Tomasz Kosciolek, and Stuart Tetchner. MetaPSICOV: Combining coevolution methods for accurate prediction of contacts and long range hydrogen bonding in proteins. Bioinformatics, 31(7):999–1006, 2015.
[19] Tomasz Kosciolek and David T. Jones. Accurate contact predictions using covariation techniques and machine learning. Proteins: Structure, Function and Bioinformatics, 84(Suppl 1):145–151, 2016.
Single-Image Depth Perception in the Wild
Weifeng Chen
Zhao Fu
Dawei Yang
Jia Deng
University of Michigan, Ann Arbor
{wfchen,zhaofu,ydawei,jiadeng}@umich.edu
Abstract
This paper studies single-image depth perception in the wild, i.e., recovering depth from a single image taken in unconstrained settings. We introduce a new dataset “Depth in the Wild” consisting of images in the wild annotated with relative depth between pairs of random points. We also propose a new algorithm that learns to estimate metric depth using annotations of relative depth. Compared to the state of the art, our algorithm is simpler and performs better. Experiments show that our algorithm, combined with existing RGB-D data and our new relative depth annotations, significantly improves single-image depth perception in the wild.
[Figure 1 schematic: relative depth annotations and RGB-D data are used to train a deep network with pixel-wise prediction, mapping an input image to metric depth.]
Figure 1: We crowdsource annotations of relative depth and train a deep network to recover depth from a single image taken in unconstrained settings (“in the wild”).
1 Introduction
Depth from a single RGB image is a fundamental problem in vision. Recent years have seen rapid progress thanks to data-driven methods [1, 2, 3], in particular deep neural networks trained on large RGB-D datasets [4, 5, 6, 7, 8, 9, 10]. But such advances have yet to broadly impact higher-level tasks. One reason is that many higher-level tasks must operate on images “in the wild”, i.e., images taken with no constraints on cameras, locations, scenes, and objects, whereas the RGB-D datasets used to train and evaluate image-to-depth systems are constrained in one way or another.
Current RGB-D datasets were collected by depth sensors [4, 5], which are limited in range and resolution, and often fail on specular or transparent objects [11]. In addition, because there is no Flickr for RGB-D images, researchers have to capture the images manually. As a result, current RGB-D datasets are limited in the diversity of scenes. For example, NYU Depth [4] consists mostly of indoor scenes with no human presence; KITTI [5] consists mostly of road scenes captured from a car; Make3D [3, 12] consists mostly of outdoor scenes of the Stanford campus (Figure 2). While these datasets are pivotal in driving research, it is unclear whether systems trained on them can generalize to images in the wild.
Is it possible to collect ground-truth depth for images in the wild? Using depth sensors in unconstrained settings is not yet feasible. Crowdsourcing seems viable, but humans are not good at estimating metric depth, or 3D metric structure in general [13]. In fact, metric depth from a single image is fundamentally ambiguous: a tree behind a house can be slightly bigger but further away, or slightly smaller but closer; the absolute depth difference between the house and the tree cannot be uniquely determined. Furthermore, even in cases where humans can estimate metric depth, it is unclear how to elicit the values from them.
But humans are better at judging relative depth [13]: “Is point A closer than point B?” is often a much easier question for humans. Recent work by Zoran et al. [14] shows that it is possible to learn to estimate metric depth using only annotations of relative depth. Although such metric depth estimates are only accurate up to monotonic transformations, they may well be sufficiently useful for high-level tasks, especially for occlusion reasoning. The seminal results by Zoran et al. point to two fronts for further progress: (1) collecting a large amount of relative depth annotations for images in the wild and (2) improving the algorithms that learn from annotations of relative depth.
In this paper, we make contributions on both fronts. Our first contribution is a new dataset called “Depth in the Wild” (DIW). It consists of 495K diverse images, each annotated with randomly sampled points and their relative depth. We sample one pair of points per image to minimize the redundancy of annotation.1 To the best of our knowledge this is the first large-scale dataset consisting of images in the wild with relative depth annotations. We demonstrate that this dataset can be used as an evaluation benchmark as well as a training resource.2
Our second contribution is a new algorithm for learning to estimate metric depth using only annotations of relative depth. Our algorithm not only significantly outperforms that of Zoran et al. [14],
but is also simpler. The algorithm of Zoran et al. [14] first learns a classifier to predict the ordinal
relation between two points in an image. Given a new image, this classifier is repeatedly applied
to predict the ordinal relations between a sparse set of point pairs (mostly between the centers of
neighboring superpixels). The algorithm then reconstructs depth from the predicted ordinal relations
by solving a constrained quadratic optimization that enforces additional smoothness constraints and
reconciles potentially inconsistent ordinal relations. Finally, the algorithm estimates depth for all
pixels assuming a constant depth within each superpixel.
In contrast, our algorithm consists of a single deep network that directly predicts pixel-wise depth
(Fig. 1). The network takes an entire image as input, consists of off-the-shelf components, and
can be trained entirely with annotations of relative depth. The novelty of our approach lies in the
combination of two ingredients: (1) a multi-scale deep network that produces pixel-wise prediction
of metric depth and (2) a loss function using relative depth. Experiments show that our method produces pixel-wise depth that is more accurately ordered, outperforming not only the method by Zoran et al. [14] but also the state-of-the-art image-to-depth system by Eigen et al. [8] trained with ground-truth metric depth. Furthermore, combining our new algorithm, our new dataset, and existing RGB-D data significantly improves single-image depth estimation in the wild.
2 Related work
RGB-D Datasets: Prior work on constructing RGB-D datasets has relied on either Kinect [15, 4, 16, 17] or LIDAR [5, 3]. Existing Kinect-based datasets are limited to indoor scenes; existing LIDAR-based datasets are biased towards scenes of man-made structures [5, 3]. In contrast, our dataset covers a much wider variety of scenes; it can be easily expanded with large-scale crowdsourcing and the virtually unlimited supply of Internet images.
Intrinsic Images in the Wild: Our work draws inspiration from Intrinsic Images in the Wild [18], a
seminal work that crowdsources annotations of relative reflectance on unconstrained images. Our
work differs in goals as well as in several design decisions. First, we sample random points instead of
centers of superpixels, because unlike reflectance, it is unreasonable to assume a constant depth within
a superpixel. Second, we sample only one pair of points per image instead of many to maximize the
value of human annotations.
Depth from a Single Image: Image-to-depth is a long-standing problem with a large body of
literature [19, 20, 12, 1, 6, 7, 8, 9, 10, 19, 21, 22, 23, 24, 25, 26]. The recent convergence of deep
1. A small percentage of images have duplicates and thus have multiple pairs.
2. Project website: http://www-personal.umich.edu/~wfchen/depth-in-the-wild.
Figure 2: Example images from current RGB-D datasets (NYU V2, KITTI, Make3D) and our Depth in the Wild (DIW) dataset.
Figure 3: Annotation UI. The user presses “1” or “2” to pick the closer point.
Figure 4: Relative image location (normalized to [-1,1]) and relative depth of two random points.
neural networks and RGB-D datasets [4, 5] has led to major advances [27, 6, 28, 8, 10, 14]. But
the networks in these previous works, with the exception of [14], were trained exclusively using
ground-truth metric depth, whereas our approach uses relative depth.
Our work is inspired by that of Zoran et al. [14], which proposes to use a deep network to repeatedly
classify pairs of points sampled based on superpixel segmentation, and to reconstruct per-pixel metric
depth by solving an additional optimization problem. Our approach is different: it consists of a single
deep network trained end-to-end that directly predicts per-pixel metric depth; there is no intermediate
classification of ordinal relations and as a result no optimization needed to resolve inconsistencies.
Learning with Ordinal Relations: Several recent works [29, 30] have used the ordinal relations
from the Intrinsic Images in the Wild dataset [18] to estimate surface refletance. Similar to Zoran et
al. [14], Zhou et al. [29] first learn a deep network to classify the ordinal relations between pairs of
points and then make them globally consistent through energy minimization.
Narihira et al. [30] learn a “lightness potential” network that takes an image patch and predicts the metric reflectance of the center pixel. But this network is applied to only a sparse set of pixels. Although in principle this lightness potential network can be applied to every pixel to produce pixel-wise reflectance, doing so would be quite expensive. Making it fully convolutional (as the authors mentioned in [30]) only solves the problem partially: as long as the lightness potential network has downsampling layers, which is the case in [30], the final output will be downsampled accordingly. Additional resolution augmentation (such as the “shift and stitch” approach [31]) is thus needed. In contrast, our approach completely avoids such issues and directly outputs pixel-wise estimates.
Beyond intrinsic images, ordinal relations have been used widely in computer vision and machine
learning, including object recognition [32] and learning to rank [33, 34].
3 Dataset construction
We gather images from Flickr. We use random query keywords sampled from an English dictionary and exclude artificial images such as drawings and clip art. To collect annotations of relative depth, we present a crowd worker with an image and two highlighted points (Fig. 3), and ask: “Which point is closer, point 1, point 2, or hard to tell?” The worker presses a key to respond.
How Many Pairs? How many pairs of points should we query per image? We sample just one per image because this maximizes the amount of information from human annotators. Consider the other extreme: querying all possible pairs of points in the same image. This is wasteful because pairs of points in close proximity are likely to have the same relative depth. In other words, querying one more pair from the same image may add less information than querying one more pair from a new image. Thus querying only one pair per image is more cost-effective.
Figure 5: Example images and annotations (unconstrained pairs, symmetric pairs, hard-to-tell pairs). Green points are those annotated as closer in depth.
Which Pairs? Which two points should we query given an image? The simplest way would be to
sample two random points from the 2D plane. But this results in a severe bias that can be easily
exploited: if an algorithm simply classifies the lower point in the image to be closer in depth, it will
agree with humans 85.8% of the time (Fig. 4). Although this bias is natural, it makes the dataset less
useful as a benchmark.
An alternative is to sample two points uniformly from a random horizontal line, which makes it
impossible to use the y image coordinate as a cue. But we find yet another bias: if an algorithm simply
classifies the point closer to the center of the image to be closer in depth, it will agree with humans
71.4% of the time. This leads to a third approach: uniformly sample two symmetric points with
respect to the center from a random horizontal line (the middle column of Fig. 5). With the symmetry
enforced, we are not able to find a simple yet effective rule based purely on image coordinates: the
left point is almost equally likely (50.03%) to be closer than the right one.
Our final dataset consists of a roughly 50-50 combination of unconstrained pairs and symmetric pairs,
which strikes a balance between the need for representing natural scene statistics and the need for
performance differentiation.
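A sketch of the symmetric-pair query design described above (function and variable names are ours):

```python
import random

def sample_symmetric_pair(width, height):
    """Sample two points symmetric about the vertical center line of the
    image, on a random horizontal line (a sketch of the query design)."""
    y = random.randrange(height)
    dx = random.randrange(1, width // 2 + 1)  # horizontal offset from the center
    cx = width / 2.0
    return (int(cx - dx), y), (int(cx + dx), y)
```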
Protocol and Results: We crowdsource the annotations using Amazon Mechanical Turk (AMT). To remove spammers, we insert into all tasks gold-standard images verified by ourselves, and reject workers whose cumulative accuracy on the gold-standard images is below 85%. We assign each query (an image and a point pair) to two workers, and add the query to our dataset if both workers can tell the relative depth and agree with each other; otherwise the query is discarded. Under this protocol, the chance of adding a wrong answer to our dataset is less than 1% as measured on the gold-standard images.
We processed 1.24M images on AMT and obtained 0.5M valid answers (both workers can tell the relative depth and agree with each other). Among the valid answers, 261K are for unconstrained pairs and 240K are for symmetric pairs. For unconstrained pairs, it takes a median of 3.4 seconds for a worker to decide, and two workers agree on the relative depth 52% of the time; for symmetric pairs, the numbers are 3.8s and 32%. These numbers suggest that the symmetric pairs are indeed harder. Fig. 5 presents examples of the different kinds of queries.
4 Learning with relative depth
How do we learn to predict metric depth given only annotations of relative depth? Zoran et al. [14] first learn a classifier to predict ordinal relations between centers of superpixels, then reconcile the relations to recover depth using energy minimization, and finally interpolate within each superpixel to produce per-pixel depth.
We take a simpler approach. The idea is that any image-to-depth algorithm would have to compute a
function that maps an image to pixel-wise depth. Why not represent this function as a neural network
and learn it from end to end? We just need two ingredients: (1) a network design that outputs the
same resolution as the input, and (2) a way to train the network with annotations of relative depth.
Network Design: Networks that output the same resolution as the input are plentiful, including the recent designs for depth estimation [8, 35] and those for semantic segmentation [36] and edge detection [37]. A common element is processing and passing information across multiple scales. In this work, we use a variant of the recently introduced “hourglass” network (Fig. 6), which has been used to achieve state-of-the-art results on human pose estimation [38].
Figure 6: Network design. Each block represents a layer. Blocks sharing the same color are identical. The ⊕ sign denotes element-wise addition. Block H is a convolution with a 3×3 filter. All other blocks denote the Inception module shown in Figure 7. Their parameters are detailed in Tab. 1.
Table 1: Parameters for each type of layer in our network. Conv1 to Conv4 are the sizes of the filters used in the components of the Inception module shown in Figure 7. Conv2 to Conv4 share the same number of inputs, which is specified in Inter Dim.

Figure 7: Variant of the Inception module [39] used by us: the previous layer feeds a Conv1 branch and three 1x1 Conv branches followed by Conv2, Conv3, and Conv4; the branch outputs are concatenated (filter concatenation).

Block Id  #In/#Out  Inter Dim  Conv1  Conv2  Conv3  Conv4
A         128/64    64         1x1    3x3    7x7    11x11
B         128/128   32         1x1    3x3    5x5    7x7
C         128/128   64         1x1    3x3    7x7    11x11
D         128/256   32         1x1    3x3    5x5    7x7
E         256/256   32         1x1    3x3    5x5    7x7
F         256/256   64         1x1    3x3    7x7    11x11
G         256/128   32         1x1    3x3    5x5    7x7
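One plausible reading of Fig. 7 and Tab. 1 as code, assuming the concatenated output channels are split evenly across the four branches (the paper does not state the per-branch widths):

```python
import torch
import torch.nn as nn

class InceptionVariant(nn.Module):
    """Sketch of the Inception-module variant of Fig. 7 / Tab. 1: a Conv1
    (1x1) branch plus three 1x1 -> kxk branches whose outputs are
    concatenated. Channel bookkeeping is our reading of Tab. 1."""
    def __init__(self, c_in, c_out, inter, k2, k3, k4):
        super().__init__()
        self.pre = nn.Conv2d(c_in, c_out // 4, 1)   # the Conv1 (1x1) branch
        branch = lambda k: nn.Sequential(
            nn.Conv2d(c_in, inter, 1),               # reduce to Inter Dim
            nn.Conv2d(inter, c_out // 4, k, padding=k // 2))
        self.b2, self.b3, self.b4 = branch(k2), branch(k3), branch(k4)

    def forward(self, x):
        return torch.cat([self.pre(x), self.b2(x), self.b3(x), self.b4(x)], dim=1)

# e.g. Block A of Tab. 1: InceptionVariant(128, 64, inter=64, k2=3, k3=7, k4=11)
```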
It consists of a series of convolutions (using a variant of the Inception [39] module) and downsampling, followed by a series of convolutions and upsampling, interleaved with skip connections that add back features from high resolutions. The symmetric shape of the network resembles an hourglass, hence the name. We refer the reader to [38] for a comparison of this design to related work. For our purpose, this particular choice is not essential: the various designs mainly differ in how information from different scales is dispersed and aggregated, and it is possible that all of them would work equally well for our task.
Loss Function: How do we train the network using only ordinal annotations? All we need is a loss function that encourages the predicted depth map to agree with the ground-truth ordinal relations. Specifically, consider a training image $I$ and its $K$ queries $R = \{(i_k, j_k, r_k)\}$, $k = 1, \ldots, K$, where $i_k$ is the location of the first point in the $k$-th query, $j_k$ is the location of the second point in the $k$-th query, and $r_k \in \{+1, -1, 0\}$ is the ground-truth depth relation between $i_k$ and $j_k$: closer ($+1$), further ($-1$), and equal ($0$). Let $z$ be the predicted depth map and $z_{i_k}, z_{j_k}$ be the depths at points $i_k$ and $j_k$. We define a loss function
$$L(I, R, z) = \sum_{k=1}^{K} \psi_k(I, i_k, j_k, r, z), \qquad (1)$$
where $\psi_k(I, i_k, j_k, z)$ is the loss for the $k$-th query:
$$\psi_k(I, i_k, j_k, z) = \begin{cases} \log\left(1 + \exp(-z_{i_k} + z_{j_k})\right), & r_k = +1 \\ \log\left(1 + \exp(z_{i_k} - z_{j_k})\right), & r_k = -1 \\ (z_{i_k} - z_{j_k})^2, & r_k = 0. \end{cases} \qquad (2)$$
This is essentially a ranking loss: it encourages a small difference between depths if the ground-truth
relation is equality; otherwise it encourages a large difference.
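Equations (1)-(2) translate directly into a few lines of code. The sketch below assumes PyTorch (the paper does not tie the loss to a framework) and uses softplus(-r·(z_i - z_j)), which equals the two logarithmic branches of Eq. (2) for r = ±1:

```python
import torch

def relative_depth_loss(z, queries):
    """Ranking loss of Eq. (1)-(2). `z` is the predicted depth map (H, W);
    `queries` is a list of ((yi, xi), (yj, xj), r) with r in {+1, -1, 0}.
    A sketch assuming PyTorch, not the authors' implementation."""
    loss = z.new_zeros(())
    for (yi, xi), (yj, xj), r in queries:
        diff = z[yi, xi] - z[yj, xj]
        if r == 0:
            loss = loss + diff ** 2                          # equal: penalize any gap
        else:
            # log(1 + exp(-r * diff)) for r = +1 or -1
            loss = loss + torch.nn.functional.softplus(-r * diff)
    return loss
```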
Novelty of Our Approach: Our novelty lies in the combination of a deep network that does pixel-wise prediction and a ranking loss placed on that pixel-wise prediction. A deep network that does pixel-wise prediction is not new, nor is a ranking loss. But to the best of our knowledge, such a combination has not been proposed before, and in particular not for estimating depth.
5 Experiments on NYU Depth
We evaluate our method using NYU Depth [4], which consists of indoor scenes with ground-truth Kinect depth. We use the same setup as that of Zoran et al. [14]: point pairs are sampled from the training images (the subset of NYU Depth consisting of 795 images with semantic labels) using superpixel segmentation, and their ground-truth ordinal relations are generated by comparing the ground-truth Kinect depth; the same procedure is applied to the test set to generate the point pairs for evaluation (around 3K pairs per image). We use the same training and test data as Zoran et al. [14].
Figure 8: Qualitative results on NYU Depth by our method, the method of Eigen et al. [8], and the method of Zoran et al. [14] (columns: input image, our depth, Zoran, Eigen, ground truth). All depth maps except ours are directly from [14]. More results are in the supplementary material.
Table 2: Left table: ordinal error measures (disagreement rate with ground-truth depth ordering) on NYU Depth. Right table: metric error measures on NYU Depth. Details for each metric can be found in [8]. There are two versions of results by Eigen et al. [8], one using AlexNet (Eigen(A)) and one using VGGNet (Eigen(V)). Lower is better for all error measures.
Method        WKDR    WKDR=   WKDR≠
Ours          35.6%   36.1%   36.5%
Zoran [14]    43.5%   44.2%   41.4%
rand_12K      34.9%   32.4%   37.6%
rand_6K       36.1%   32.2%   39.9%
rand_3K       35.8%   28.7%   41.3%
Ours_Full     28.3%   30.6%   28.6%
Eigen(A) [8]  37.5%   46.9%   32.7%
Eigen(V) [8]  34.0%   43.3%   29.6%

Method        RMSE   RMSE (log)  RMSE (s.inv)^a  absrel  sqrrel
Ours          1.13   0.39        0.26            0.36    0.46
Ours_Full     1.10   0.38        0.24            0.34    0.42
Zoran [14]    1.20   0.42        -               0.40    0.54
Eigen(A) [8]  0.75   0.26        0.20            0.21    0.19
Eigen(V) [8]  0.64   0.21        0.17            0.16    0.12
Wang [28]     0.75   -           -               0.22    -
Liu [6]       0.82   -           -               0.23    -
Li [10]       0.82   -           -               0.23    -
Karsch [1]    1.20   -           -               0.35    -
Baig [40]     1.0    -           -               0.3     -
Like the system of Zoran et al. [14], our network predicts one of the three ordinal relations on the test pairs: equal (=), closer (<), or farther (>). We report WKDR, the weighted disagreement rate between the predicted ordinal relations and the ground-truth ordinal relations.3 We also report WKDR= (the disagreement rate on pairs whose ground-truth relation is =) and WKDR≠ (the disagreement rate on pairs whose ground-truth relation is < or >).
Since two ground-truth depths are almost never exactly the same, there needs to be a relaxed definition of equality. Zoran et al. [14] define two points to have equal depths if the ratio between their ground-truth depths is within a pre-determined range. Our network predicts an equality relation if the depth difference is smaller than a threshold τ. The choice of this threshold will result in different values for the error metrics (WKDR, WKDR=, WKDR≠): if τ is too small, most pairs will be predicted to be unequal and the error metric on equality relations (WKDR=) will be large; if τ is too big, most pairs will be predicted to be equal and the error metric on inequality relations (WKDR≠) will be large. We choose the threshold τ that minimizes the maximum of the three error metrics on a validation set held out from the training set. Tab. 2 compares our network (Ours) to that of Zoran et al. [14]. Our network is trained with the same data4 but outperforms [14] on all three metrics.
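The threshold selection rule can be sketched as a simple grid search over a validation set; names and the sign convention for predicted relations are our assumptions:

```python
import numpy as np

def pick_tau(depth_diffs, gt_relations, taus):
    """Choose the equality threshold tau that minimizes the maximum of the
    three disagreement rates (WKDR, WKDR=, WKDR!=) on a validation set.
    `depth_diffs` are z_i - z_j; `gt_relations` are in {+1, -1, 0}."""
    best_tau, best_err = None, np.inf
    for tau in taus:
        pred = np.where(np.abs(depth_diffs) < tau, 0, np.sign(depth_diffs))
        neq = gt_relations != 0
        wkdr = np.mean(pred != gt_relations)            # all pairs
        wkdr_eq = np.mean(pred[~neq] != 0)              # ground-truth "=" pairs
        wkdr_neq = np.mean(pred[neq] != gt_relations[neq])  # "<" or ">" pairs
        err = max(wkdr, wkdr_eq, wkdr_neq)
        if err < best_err:
            best_tau, best_err = tau, err
    return best_tau
```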
Following [14], we also compare with the state-of-the-art image-to-depth system by Eigen et al. [8], which is trained on pixel-wise ground-truth metric depth from the full NYU Depth training set (220K images). To compare fairly, we give our network access to the full NYU Depth training set. In addition, we remove the limit of 800 point pairs per training image placed by Zoran et al. and use all available pairs. The results in Tab. 2 show that our network (Ours_Full) achieves superior performance in estimating depth ordering. Granted, this comparison is not entirely fair because [8] is not optimized for predicting ordinal relations. But this comparison is still significant in that it shows that we can train on only relative depth and rival the state-of-the-art system in estimating depth up to monotonic transformations.
a. Computed using our own implementation based on the definition given in [35].
3. WKDR stands for “Weighted Kinect Disagreement Rate”; the weight is set to 1 as in [14].
4. The code released by Zoran et al. [14] indicates that they train with a random subset of 800 pairs per image instead of all the pairs. We follow the same procedure and only use a random subset of 800 pairs per image.
Figure 9: Point pairs generated through superpixel segmentation [14] (left) versus point pairs
generated through random sampling with distance constraints (right).
In Fig. 8 we show qualitative results on the same example images used by Zoran et al. [14]. We see that although imperfect, the metric depth recovered by our method is overall reasonable and qualitatively similar to that of the state-of-the-art system [8] trained on ground-truth metric depth.
Metric Error Measures. Our network is trained with relative depth, so it is unsurprising that it does
well in estimating depth up to ordering. But how good is the estimated depth in terms of metric
error? We thus evaluate conventional error measures such as RMSE (the root mean squared error),
which compares the absolute depth values to the ground truths. Because our network is trained
only on relative depth and does not know the range of the ground-truth depth values, to make these
error measures meaningful we normalize the depth predicted by our network such that the mean and
standard deviation are the same as those of the mean depth map of the training set. Tab. 2 reports the
results. We see that under these metric error measures our network still outperforms the method of
Zoran et al. [14]. In addition, while our metric error is worse than the current state-of-the-art, it is
comparable to some of the earlier methods (e.g. [1]) that have access to ground-truth metric depth.
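A sketch of the normalization described above (the epsilon guard is ours):

```python
import numpy as np

def normalize_to_train_stats(pred, train_mean, train_std):
    """Affinely rescale a predicted depth map so its mean and standard
    deviation match those of the mean training depth map, making metric
    error measures meaningful for a network trained only on relative depth."""
    return (pred - pred.mean()) / (pred.std() + 1e-8) * train_std + train_mean
```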
Superpixel Sampling versus Random Sampling. To compare with the method by Zoran et al. [14],
we train our network using the same point pairs, which are pairs of centers of superpixels (Fig. 9). But
is superpixel segmentation necessary? That is, can we simply train with randomly sampled points?
To answer this question, we train our network with randomly sampled points. We constrain the distance between the two points to be between 13 and 19 pixels (out of a 320×240 image) so that the distance is similar to that between the centers of neighboring superpixels. The results are included in Tab. 2. We see that using 3.3K pairs per image (rand_3K) already achieves performance comparable to the method by Zoran et al. [14]. Using twice or four times as many pairs (rand_6K, rand_12K) further improves performance and significantly outperforms [14].
It is worth noting that in all these experiments the test pairs are still from superpixels, so training on
random pairs incurs a mismatch between training and testing distributions. Yet we can still achieve
comparable performance despite this mismatch. This shows that our method can indeed operate
without superpixel segmentation.
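The distance-constrained random sampling can be sketched as simple rejection sampling; whether the paper used rejection sampling or another scheme is not stated:

```python
import random

def sample_pair_with_distance(width, height, dmin=13, dmax=19):
    """Rejection-sample two points whose Euclidean distance lies in
    [dmin, dmax] pixels, as in the random-sampling experiment (a sketch)."""
    while True:
        p = (random.randrange(width), random.randrange(height))
        q = (random.randrange(width), random.randrange(height))
        d = ((p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2) ** 0.5
        if dmin <= d <= dmax:
            return p, q
```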
6 Experiments on Depth in the Wild
In this section we experiment on our new Depth in the Wild (DIW) dataset. We split the dataset into 421K training images and 74K test images.5
We report the WHDR (Weighted Human Disagreement Rate)6 of five methods in Tab. 3: (1) the state-of-the-art system by Eigen et al. [8] trained on full NYU Depth; (2) our network trained on full NYU Depth (Ours_Full); (3) our network pre-trained on full NYU Depth and fine-tuned on DIW (Ours_NYU_DIW); (4) our network trained from scratch on DIW (Ours_DIW); (5) a baseline method that uses only the location of the query points: classify the lower point to be closer, or guess randomly if the two points are at the same height (Query_Location_Only).
We see that the best result is achieved by pre-training on NYU Depth and fine-tuning on DIW. Training
only on NYU Depth (Ours_NYU and Eigen) does not work as well, which is expected because NYU
Depth only has indoor scenes. Training from scratch on DIW achieves slightly better performance
5. 4.38% of images are duplicates downloaded using different query keywords and have more than one pair of points. We have removed test images that have duplicates in the training set.
6. All weights are 1. A pair of points can only have two possible ordinal relations (farther or closer) for DIW.
Figure 10: Qualitative results on our Depth in the Wild (DIW) dataset by our method and the method of Eigen et al. [8] (columns: input, Eigen, Ours_NYU_DIW). More results are in the supplementary material.
Table 3: Weighted Human Disagreement Rate (WHDR) of various methods on our DIW dataset, including Eigen(V), the method of Eigen et al. [8] (VGGNet [41] version).

Method                WHDR
Eigen(V) [8]          25.70%
Ours_Full             31.31%
Ours_NYU_DIW          14.39%
Ours_DIW              22.14%
Query_Location_Only   31.37%
than those trained on only NYU Depth, despite using much less supervision. Pre-training on NYU Depth and fine-tuning on DIW leverages all available data and achieves the best performance. As shown in Fig. 10, the quality of the predicted depth is notably better with fine-tuning on DIW, especially for outdoor scenes. These results suggest that it is promising to combine existing RGB-D data and crowdsourced annotations to advance the state of the art in single-image depth estimation.
7 Conclusions
We have studied single-image depth perception in the wild, recovering depth from a single image
taken in unconstrained settings. We have introduced a new dataset consisting of images in the wild
annotated with relative depth and proposed a new algorithm that learns to estimate metric depth
supervised by relative depth. We have shown that our algorithm outperforms prior art and our
algorithm, combined with existing RGB-D data and our new relative depth annotations, significantly
improves single-image depth perception in the wild.
Acknowledgments
This work is partially supported by the National Science Foundation under Grant No. 1617767.
References
[1] K. Karsch, C. Liu, and S. B. Kang, "Depthtransfer: Depth extraction from video using non-parametric
sampling," TPAMI, 2014.
[2] D. Hoiem, A. A. Efros, and M. Hebert, "Automatic photo pop-up," TOG, 2005.
[3] A. Saxena, M. Sun, and A. Ng, "Make3d: Learning 3d scene structure from a single still image," TPAMI,
2009.
[4] N. Silberman, D. Hoiem, P. Kohli, and R. Fergus, "Indoor segmentation and support inference from rgbd
images," in ECCV, Springer, 2012.
[5] A. Geiger, P. Lenz, C. Stiller, and R. Urtasun, "Vision meets robotics: The kitti dataset," The International
Journal of Robotics Research, p. 0278364913491297, 2013.
[6] F. Liu, C. Shen, and G. Lin, "Deep convolutional neural fields for depth estimation from a single image,"
in CVPR, 2015.
[7] L. Ladicky, J. Shi, and M. Pollefeys, "Pulling things out of perspective," in CVPR, IEEE, 2014.
[8] D. Eigen and R. Fergus, "Predicting depth, surface normals and semantic labels with a common multi-scale
convolutional architecture," in ICCV, 2015.
[9] M. H. Baig and L. Torresani, "Coupled depth learning," arXiv preprint arXiv:1501.04537, 2015.
[10] B. Li, C. Shen, Y. Dai, A. van den Hengel, and M. He, "Depth and surface normal estimation from
monocular images using regression on deep features and hierarchical crfs," in CVPR, 2015.
[11] W. W.-C. Chiu, U. Blanke, and M. Fritz, "Improving the kinect by cross-modal stereo," in BMVC, 2011.
[12] A. Saxena, S. H. Chung, and A. Y. Ng, "Learning depth from single monocular images," in NIPS, 2005.
[13] J. T. Todd and J. F. Norman, "The visual perception of 3-d shape from multiple cues: Are observers capable
of perceiving metric structure?," Perception & Psychophysics, pp. 31-47, 2003.
[14] D. Zoran, P. Isola, D. Krishnan, and W. T. Freeman, "Learning ordinal relationships for mid-level vision,"
in ICCV, 2015.
[15] A. Janoch, S. Karayev, Y. Jia, J. T. Barron, M. Fritz, K. Saenko, and T. Darrell, "A category-level 3d object
dataset: Putting the kinect to work," in Consumer Depth Cameras for Computer Vision, Springer, 2013.
[16] S. Song, S. P. Lichtenberg, and J. Xiao, "Sun rgb-d: A rgb-d scene understanding benchmark suite," in
CVPR, 2015.
[17] S. Choi, Q.-Y. Zhou, S. Miller, and V. Koltun, "A large dataset of object scans," arXiv preprint
arXiv:1602.02481, 2016.
[18] S. Bell, K. Bala, and N. Snavely, "Intrinsic images in the wild," TOG, 2014.
[19] J. T. Barron and J. Malik, "Shape, illumination, and reflectance from shading," TPAMI, 2015.
[20] A. Saxena, S. H. Chung, and A. Y. Ng, "3-d depth reconstruction from a single still image," IJCV, 2008.
[21] Y. Xiong, A. Chakrabarti, R. Basri, S. J. Gortler, D. W. Jacobs, and T. Zickler, "From shading to local
shape," TPAMI, 2015.
[22] C. Hane, L. Ladicky, and M. Pollefeys, "Direction matters: Depth estimation with a surface normal
classifier," in CVPR, 2015.
[23] B. Liu, S. Gould, and D. Koller, "Single image depth estimation from predicted semantic labels," in CVPR,
2010.
[24] E. Shelhamer, J. Barron, and T. Darrell, "Scene intrinsics and depth from a single image," in ICCV
Workshops, 2015.
[25] J. Shi, X. Tao, L. Xu, and J. Jia, "Break Ames room illusion: depth from general single images," TOG,
2015.
[26] W. Zhuo, M. Salzmann, X. He, and M. Liu, "Indoor scene structure analysis for single image depth
estimation," in CVPR, 2015.
[27] Z. Zhang, A. G. Schwing, S. Fidler, and R. Urtasun, "Monocular object instance segmentation and depth
ordering with cnns," in ICCV, 2015.
[28] P. Wang, X. Shen, Z. Lin, S. Cohen, B. Price, and A. L. Yuille, "Towards unified depth and semantic
prediction from a single image," in CVPR, 2015.
[29] T. Zhou, P. Krahenbuhl, and A. A. Efros, "Learning data-driven reflectance priors for intrinsic image
decomposition," in ICCV, 2015.
[30] T. Narihira, M. Maire, and S. X. Yu, "Learning lightness from human judgement on relative reflectance,"
in CVPR, IEEE, 2015.
[31] P. Sermanet, D. Eigen, X. Zhang, M. Mathieu, R. Fergus, and Y. LeCun, "Overfeat: Integrated recognition,
localization and detection using convolutional networks," arXiv preprint arXiv:1312.6229, 2013.
[32] D. Parikh and K. Grauman, "Relative attributes," in ICCV, IEEE, 2011.
[33] Z. Cao, T. Qin, T.-Y. Liu, M.-F. Tsai, and H. Li, "Learning to rank: from pairwise approach to listwise
approach," in ICML, ACM, 2007.
[34] T. Joachims, "Optimizing search engines using clickthrough data," in Proceedings of the eighth ACM
SIGKDD international conference on Knowledge discovery and data mining, ACM, 2002.
[35] D. Eigen, C. Puhrsch, and R. Fergus, "Depth map prediction from a single image using a multi-scale deep
network," in NIPS, 2014.
[36] J. Long, E. Shelhamer, and T. Darrell, "Fully convolutional networks for semantic segmentation," in CVPR,
2015.
[37] S. Xie and Z. Tu, "Holistically-nested edge detection," CoRR, vol. abs/1504.06375, 2015.
[38] A. Newell, K. Yang, and J. Deng, "Stacked hourglass networks for human pose estimation," arXiv preprint
arXiv:1603.06937, 2016.
[39] C. Szegedy, W. Liu, Y. Jia, P. Sermanet, S. Reed, D. Anguelov, D. Erhan, V. Vanhoucke, and A. Rabinovich,
"Going deeper with convolutions," in CVPR, 2015.
[40] M. H. Baig, V. Jagadeesh, R. Piramuthu, A. Bhardwaj, W. Di, and N. Sundaresan, "Im2depth: Scalable
exemplar based depth transfer," in WACV, IEEE, 2014.
[41] K. Simonyan and A. Zisserman, "Very deep convolutional networks for large-scale image recognition,"
arXiv preprint arXiv:1409.1556, 2014.
Learning Fuzzy Rule-Based Neural
Networks for Control
Charles M. Higgins and Rodney M. Goodman
Department of Electrical Engineering, 116-81
California Institute of Technology
Pasadena, CA 91125
Abstract
A three-step method for function approximation with a fuzzy system is proposed. First, the membership functions and an initial
rule representation are learned; second, the rules are compressed
as much as possible using information theory; and finally, a computational network is constructed to compute the function value.
This system is applied to two control examples: learning the truck
and trailer backer-upper control system, and learning a cruise control system for a radio-controlled model car.
1
Introduction
Function approximation is the problem of estimating a function from a set of examples of its independent variables and function value. If there is prior knowledge
of the type of function being learned, a mathematical model of the function can be
constructed and the parameters perturbed until the best match is achieved. However, if there is no prior knowledge of the function, a model-free system such as a
neural network or a fuzzy system may be employed to approximate an arbitrary
nonlinear function. A neural network's inherent parallel computation is efficient
for speed; however, the information learned is expressed only in the weights of the
network. The advantage of fuzzy systems over neural networks is that the information learned is expressed in terms of linguistic rules. In this paper, we propose a
method for learning a complete fuzzy system to approximate example data. The
membership functions and a minimal set of rules are constructed automatically from
the example data, and in addition the final system is expressed as a computational
(neural) network for efficient parallel computation of the function value, combining
the advantages of neural networks and fuzzy systems. The proposed learning algorithm can be used to construct a fuzzy control system from examples of an existing
control system's actions.

[Figure 1: Membership function example. A piecewise linear membership function "Pos" over a variable-value axis running from -1.0 to 5.0.]
Hereafter, we will refer to the function value as the output variable, and the independent variables of the function as the input variables.
2 Fuzzy Systems
In a fuzzy system, a function is expressed in terms of membership functions and
rules. Each variable has membership functions which partition its range into overlapping classes (see figure 1). Given these membership functions for each variable,
a function may be expressed by making rules from the input space to the output
space and smoothly varying between them.
In order to simplify the learning of membership functions, we will specify a number
of their properties beforehand. First, we will use piecewise linear membership functions. We will also specify that membership functions are fully overlapping; that is,
at any given value of the variable the total membership sums to one. Given these
two properties of the membership functions, we need only specify the positions of
the peaks of the membership functions to completely describe them.
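As a concrete illustration of these conventions, the following sketch (ours, not from the paper) evaluates fully overlapping piecewise linear membership functions given only their peak positions; at any input the returned memberships sum to one.

```python
def memberships(x, peaks):
    """Membership of x in each function, for sorted peak positions `peaks`."""
    mu = [0.0] * len(peaks)
    if x <= peaks[0]:
        mu[0] = 1.0
    elif x >= peaks[-1]:
        mu[-1] = 1.0
    else:
        for i in range(len(peaks) - 1):
            if peaks[i] <= x <= peaks[i + 1]:
                t = (x - peaks[i]) / (peaks[i + 1] - peaks[i])
                mu[i], mu[i + 1] = 1.0 - t, t
                break
    return mu

# e.g. memberships(0.5, [-1.0, 0.0, 1.0, 5.0]) -> [0.0, 0.5, 0.5, 0.0]
```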
We define a fuzzy rule as if y then X, where y (the condition side) is a conjunction
in which each clause specifies an input variable and one of the membership functions associated with it, and X (the conclusion side) specifies an output variable
membership function.
3 Learning a Fuzzy System from Example Data
There are three steps in our method for constructing a fuzzy system: first, learn the
membership functions and an initial rule representation; second, simplify (compress)
the rules as much as possible using information theory; and finally, construct a
computational network with the rules and membership functions to calculate the
function value given the independent variables.
3.1 Learning the Membership Functions
Before learning, two parameters must be specified. First, the maximum allowable
RMS error of the approximation from the example data; second, the maximum
number of membership functions for each variable. The system will not exceed
this number of membership functions, but may use fewer if the error is reduced
sufficiently before the maximum number is reached.
3.1.1 Learning by Successive Approximation to the Target Function
The following procedure is performed to construct membership functions and a set
of rules to approximate the given data set. All of the rules in this step are cell-based, that is, they have a condition for every input variable; there is a rule for
every combination of input variables (cells).
We begin with input membership functions at input extrema. The closest example
point to each "corner" of the input space is found and a membership function for
the output is added at its value at the corner point. The initial rule set contains
a rule for each corner, specifying the closest output membership function to the
actual value at that corner.
We now find the example point with the greatest RMS error from the current model
and add membership functions in each variable at that point. Next, we construct
a new set of rules to approximate the function. Constructing rules simply means
determining the output membership function to associate with each cell. While
constructing this rule set, we also add any output membership functions which are
needed. The best rule for a given cell is found by finding the closest example point
to the rule (recall each rule specifies a point in the input space). If the output
value at this point is "too far" from the closest output membership function value,
this output value is added as a new output membership. After this addition has
been made, if necessary, the closest output membership function to the value at the
closest point is used as the conclusion of the rule. At this point, if the error threshold
has been reached or all membership functions are full, we exit. Otherwise, we go
back to find the point with the greatest error from the model and iterate again.
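The loop above can be summarized as the following sketch (our paraphrase; every helper name here is a hypothetical placeholder, not from the paper, so this outlines the control flow rather than a runnable implementation).

```python
def learn_fuzzy_system(examples, max_mfs, max_rms_error):
    peaks = init_peaks_at_input_extrema(examples)          # input MFs at the corners
    rules, out_peaks = init_corner_rules(examples, peaks)  # one rule per corner cell
    while True:
        model = build_model(peaks, out_peaks, rules)
        if rms_error(model, examples) <= max_rms_error or all_mfs_full(peaks, max_mfs):
            return model
        worst = max(examples, key=lambda e: (model(e.x) - e.y) ** 2)
        add_peak_in_each_variable(peaks, worst.x)          # split at the worst point
        rules, out_peaks = rebuild_cell_rules(examples, peaks, out_peaks)
```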
3.2 Simplifying the Rules
In order to have as simple a fuzzy system as possible, we would like to use the minimum possible number of rules. The initial cell-based rule set can be "compressed"
into a minimal set of rules; we propose the use of an information-theoretic algorithm
for induction of rules from a discrete data set [1] for this purpose. The key to the
use of this method is the interpretation of each of the original rules as a discrete
example. The rule set becomes a discrete data set which is input to a rule-learning
algorithm. This algorithm learns the best rules to describe the data set.
There are two components of the rule-learning scheme. First, we need a way to tell
which of two candidate rules is the best. Second, we need a way to search the space
of all possible rules in order to find the best rules without simply checking every
rule in the search space.
3.2.1 Ranking Rules
Smyth and Goodman[2] have developed an information-theoretic measure of rule
value with respect to a given discrete data set. This measure is known as the
j-measure; defining a rule as if y then X, the j-measure can be expressed as follows:
j(X|y) = p(X|y) log2( p(X|y) / p(X) ) + p(¬X|y) log2( p(¬X|y) / p(¬X) )
[2] also suggests a modified rule measure, the J-measure:
J(X|y) = p(y) j(X|y)
This measure discounts rules which are not as useful in the data set in order to
remove the effects of "noise" or randomness. The probabilities in both measures
are computed from relative frequencies counted in the given discrete data set.
Using the j-measure, examples will be combined only when no error is caused in the
prediction of the data set. The J-measure, on the other hand, will combine examples
even if some prediction ability of the data is lost. If we simply use the j-measure
to compress our original rule set, we don't get significant compression. However,
we can only tolerate a certain margin of error in prediction of our original rule set
and maintain the same control performance. In order to obtain compression, we
wish to allow some error, but not so much as the J-measure will create. We thus
propose the following measure, which allows a gradual variation of the amount of
noise tolerance:
L(X|y) = f(p(y), a) j(X|y),   where f(x, a) = (1 - e^(-ax)) / (1 - e^(-a))

The parameter a may be set at 0+ to obtain the J-measure, since f(x, 0+) = x, or
at ∞ to obtain the j-measure, since f(x, ∞) = 1 (x > 0). Any value of a between
0 and ∞ will result in an amount of compression between that of the J-measure
and the j-measure; thus if we are able to tolerate some error in the prediction of
the original rule set, we can obtain more compression than the j-measure could give
us, but not as much as the J-measure would require. We show an example of the
variation of a for the truck backer-upper control system in section 4.1.
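The three goodness measures translate directly into code; the following sketch (ours) takes the probabilities, which in the paper are relative frequencies counted in the discrete rule set, as precomputed inputs.

```python
import math

def j_measure(p_x, p_x_given_y):
    """j(X|y): how much observing y sharpens belief in the conclusion X."""
    terms = [(p_x_given_y, p_x), (1.0 - p_x_given_y, 1.0 - p_x)]
    return sum(p * math.log2(p / q) for p, q in terms if p > 0)

def J_measure(p_y, p_x, p_x_given_y):
    return p_y * j_measure(p_x, p_x_given_y)

def L_measure(p_y, p_x, p_x_given_y, a):
    f = (1.0 - math.exp(-a * p_y)) / (1.0 - math.exp(-a))   # f(p(y), a)
    return f * j_measure(p_x, p_x_given_y)
```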
3.2.2 Searching for the Best Rules
In [1], we presented an efficient method for searching the space of all possible rules to
find the most representative ones for discrete data sets. The basic idea is that each
example is a very specific (and quite perfect) rule. However, this rule is applicable
to only one example. We wish to generalize this very specific rule to cover as many
examples as possible, while at the same time keeping it as correct as possible. The
goodness-measures shown above are just the tool for doing this. If we calculate the
"goodness" of all the rules generated by removing a single input variable from the
very specific rule, then we will be able to tell if any of the slightly more general
rules generated from this rule are better. If so, we take the best and continue in this
manner until no more general rule with a higher "goodness" exists. When we have
performed this procedure on the very specific rule generated from each example
(and removed duplicates), we will have a set of rules which represents the data set.
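In code, this greedy general-to-specific search looks roughly as follows (our sketch; `rule.without(c)` is a hypothetical method that drops one condition, and `goodness` is one of the measures of section 3.2.1 estimated over the rule set).

```python
def generalize(rule, goodness):
    best, best_score = rule, goodness(rule)
    while best.conditions:
        challenger = max((best.without(c) for c in best.conditions), key=goodness)
        if goodness(challenger) <= best_score:
            break                        # no more general rule scores higher
        best, best_score = challenger, goodness(challenger)
    return best
```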
[Figure 2: Computational network constructed from fuzzy system. From input to output, the layers hold the input membership functions, the rules, the output membership functions, and the defuzzification node; lateral inhibitory connections run between the outputs of various rules.]
3.3 Constructing a Network
Constructing a computational network to represent a given fuzzy system can be
accomplished as shown in figure 2. From input to output, layers represent input
membership functions, rules, output membership functions, and finally defuzzification. A novel feature of our network is the lateral links shown in figure 2 between
the outputs of various rules. These links allow inference with dependent rules.
3.3.1 The Layers of the Network
The first layer contains a node for every input membership function used in the rule
set. Each of these nodes responds with a value between zero and one to a certain
region of the input variable range, implementing a single membership function.
The second layer contains a node for each rule - each of these nodes represents
a fuzzy AND, implemented as a product. The third layer contains a node for
every output membership function. Each of these nodes sums the outputs from
each rule that concludes that output fuzzy set. The final node simply takes the
output memberships collected in the previous layer and performs a defuzzification
to produce the final crisp output by normalizing the weights from each output node
and performing a convex combination with the peaks of the output membership
functions.
3.3.2 The Problem with Dependent Rules and a Solution
There is a problem with the standard fuzzy inference techniques when used with
dependent rules. Consider a rule whose conditions are all contained in a more specific rule (i.e. one with more conditions) which contradicts its conclusion. Using
standard fuzzy techniques, the more general rule will drive the output to an intermediate value between the two conclusions. What we really want is that a more
general rule dependent on a more specific rule should only be allowed to fire to
the degree that the more specific rule is not firing. Thus the degree of firing of the
more specific rule should gate the maximum firing allowed for the more general
rule. This is expressed in network form in the links between the rule layer and the
output membership functions layer. The lateral arrows are inhibitory connections
which take the value at their input, invert it (subtract it from one), and multiply
it by the value at their output.
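Putting the four layers and the gating links together, a forward pass of the network could look like the following sketch (our own encoding of the rule structure, not the authors' implementation; `r.more_specific` lists the indices of the more specific rules that gate rule r).

```python
from math import prod

def forward(x, rules, out_peaks, mfs):
    # layers 1-2: rule firing as a fuzzy AND (product) of input memberships
    fire = [prod(mfs[var](x[var])[m] for var, m in r.conditions) for r in rules]
    # lateral inhibition: a general rule fires only to the degree that its
    # more specific dependents do not
    gated = [f * prod(1.0 - fire[j] for j in r.more_specific)
             for f, r in zip(fire, rules)]
    # layer 3: sum the gated firings per output membership function
    weight = [0.0] * len(out_peaks)
    for g, r in zip(gated, rules):
        weight[r.conclusion] += g
    # layer 4: defuzzification as a convex combination of the output peaks
    return sum(w * p for w, p in zip(weight, out_peaks)) / sum(weight)
```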
[Figure 3: The truck and trailer backer-upper problem. The configuration is described by the truck angle, the cab angle, and the y position of the truck rear relative to the loading dock.]
4 Experimental Results
In this section, we show the results of two experiments: first, a truck backer-upper
in simulation; and second, a simple cruise controller for a radio-controlled model
car constructed in our laboratory.
4.1 Truck and Trailer Backer-Upper
Jenkins and Yuhas [3] have developed by hand a very efficient neural network for
solving the problem of backing up a truck and trailer to a loading dock. The truck
and trailer backer-upper problem is parameterized in figure 3.
The function approximator system was trained on 225 example runs of the Yuhas
controller, with initial positions distributed symmetrically about the field in which
the truck operates. In order to show the effect of varying the number of membership
functions, we have fixed the maximum number of membership functions for the y
position and cab angle at 5 and set the maximum allowable error to zero, thus
guaranteeing that the system will fill out all of the allowed membership functions.
We varied the maximum number of truck angle membership functions from 3 to 9.
The effects of this are shown in figure 4. Note that the error decreases sharply and
then holds constant, reaching its minimum at 5 membership functions. The Yuhas
network performance is shown as a horizontal line. At its best, the fuzzy system
performs slightly better than the system it is approximating.
For this experiment, we set a goal of 33% rule compression. We varied the parameter
a in the L-measure for each rule set to get the desired compression. Note in figure 4
the performance of the system with compressed rules. The performance is in every
case almost identical to that of the original rule sets. The number of rules and the
amount of rule compression obtained can be seen in table 1.
4.2
Cruise Controller
In this section, we describe the learning of a cruise controller to keep a radio controlled model car driving at a constant speed in a circle. We designed a simple PD
controller to perform this task, and then learned a fuzzy system to perform the same
task. This example is not intended to suggest that a fuzzy system should replace
a simple PD controller, since the fuzzy system may represent far more complex
[Figure 4: Results of experiments with the truck backer-upper. (a) Control error: final y position; (b) control error: final truck angle. Both are plotted against the number of truck angle membership functions, for the full and compressed rule sets, with the performance of the Yuhas system shown as a horizontal line.]
Table 1: Number of rules and compression figures for learned TBU systems

Number of truck angle membership functions    3     4     5     6     7     8     9
Number of rules (cell-based)                 75   100   125   150   175   200   225
Number of rules (compressed)                 48    67    86   100   114   138   154
Compression                                 36%   33%   31%   33%   35%   31%   32%
functions, but rather to show that the fuzzy system can learn from real control data
and operate in real-time.
The fuzzy system was trained on 6 runs of the PD controller which included runs
going forward and backward, and conditions in which the car's speed was perturbed
momentarily by blocking the car or pushing it. Figure 5 shows the error trajectory
of both the hand-crafted PD and learned fuzzy control systems from rest. The car
builds speed until it reaches the desired set point with a well-damped response, then
holds speed for a while. At a later time, an obstacle was placed in the path of the
car to stop it and then removed; figure 5 shows the similar recovery responses of
both systems. It can be seen from the numerical results in table 2 that the fuzzy
system performs as well as the original PD controller.
No compression was attempted because the rule sets are already very small.
Table 2: Analysis of cruise control performance

                                       PD Controller   Learned Fuzzy System
Time from 90% error to 10% error (s)        0.9                0.7
RMS error at steady state (uncal)            59                 45
Time to correct after obstacle (s)          6.2                6.2
[Figure 5: Performance of PD controller vs. learned fuzzy system. (a) PD control system; (b) fuzzy control system. Speed versus time: the car builds speed from rest, holds the set point, and recovers similarly in both systems after an obstacle stops the car.]
5 Summary and Conclusions
We have presented a method which, given examples of a function and its independent variables, can construct a computational network based on fuzzy logic to
predict the function given the independent variables. The user must only specify
the maximum number of membership functions for each variable and the maximum
RMS error from the example data.
The final fuzzy system's actions can be explicitly explained in terms of rule firings.
If a system designer does not like some aspect of the learned system's performance,
he can simply change the rule set and the membership functions to his liking. This
is in direct contrast to a neural network system, in which he would have no recourse
but another round of training.
Acknowledgements
This work was supported in part by Pacific Bell, and in part by DARPA and ONR
under grant no. N00014-92-J-1860.
References
[1] C. Higgins and R. Goodman, "Incremental Learning using Rule-Based Neural
Networks," Proceedings of the International Joint Conference on Neural Networks,
vol. 1, 875-880, July 1991.
[2] R. Goodman, C. Higgins, J. Miller, P. Smyth, "Rule-Based Networks for Classification and Probability Estimation," Neural Computation 4(6),781-804, November
1992.
[3] R. Jenkins and B. Yuhas, "A Simplified Neural-Network Solution through Problem Decomposition: The Case of the Truck Backer-Upper," Neural Computation
4(5), 647-9, September 1992.
On statistical learning via the lens of compression
Ofir David
Department of Mathematics
Technion - Israel Institute of Technology
ofirdav@tx.technion.ac.il
Shay Moran
Department of Computer Science
Technion - Israel Institute of Technology
shaymrn@cs.technion.ac.il
Amir Yehudayoff
Department of Mathematics
Technion - Israel Institute of Technology
amir.yehudayoff@gmail.com
Abstract
This work continues the study of the relationship between sample compression
schemes and statistical learning, which has been mostly investigated within the
framework of binary classification. The central theme of this work is establishing
equivalences between learnability and compressibility, and utilizing these equivalences in the study of statistical learning theory. We begin with the setting of
multiclass categorization (zero/one loss). We prove that in this case learnability
is equivalent to compression of logarithmic sample size, and that uniform convergence implies compression of constant size. We then consider Vapnik?s general
learning setting: we show that in order to extend the compressibility-learnability
equivalence to this case, it is necessary to consider an approximate variant of compression. Finally, we provide some applications of the compressibility-learnability
equivalences.
1 Introduction
This work studies statistical learning theory using the point of view of compression. The main theme
in this work is establishing equivalences between learnability and compressibility, and making an
effective use of these equivalences to study statistical learning theory.
In a nutshell, the usefulness of these equivalences stems from the fact that compressibility is a combinatorial
notion, while learnability is a statistical notion. These equivalences, therefore, translate statistical
statements to combinatorial ones and vice versa. This translation helps to reveal properties that are
otherwise difficult to find, and highlights useful guidelines for designing learning algorithms.
We first consider the setting of multiclass categorization, which is used to model supervised learning
problems using the zero/one loss function, and then move to Vapnik's general learning setting [23],
which models many supervised and unsupervised learning problems.
Zero/one loss function (Section 3) This is the setting in which sample compression schemes were
defined by Littlestone and Warmuth [16], as an abstraction of a common property of many learning
algorithms. For more background on sample compression schemes, see e.g. [16, 8, 9, 22].
We use an agnostic version of sample compression schemes, and show that learnability is equivalent
to some sort of compression. More formally, that any learning algorithm can be transformed to a
compression algorithm, compressing a sample of size m to a sub-sample of size roughly log(m), and
that such a compression algorithm implies learning. This statement is based on arguments that appear
in [16, 10, 11]. We conclude this part by describing some applications:
(i) Equivalence between PAC and agnostic PAC learning from a statistical perspective (i.e. in terms of
sample complexity). For binary-labelled classes, this equivalence follows from basic arguments in
Vapnik-Chervonenkis (VC) theory, but these arguments do not seem to extend when the number of
labels is large.
(ii) A dichotomy for sample compression - if a non-trivial compression exists (e.g. compressing
a sample of size m to a sub-sample of size m^0.99), then a compression to logarithmic size exists
(i.e. to a sub-sample of size roughly log m). This dichotomy is analogous to the known dichotomy
concerning the growth function of binary-labelled classes: the growth function is either polynomial
(when the VC dimension is finite), or exponential (when the VC dimension is infinite).
(iii) Compression to constant size versus uniform convergence - every class with the uniform convergence property has a compression of constant size. The proof has two parts. The first part, which is
based on arguments from [18], shows that finite graph dimension (a generalization of VC dimension
for multiclass categorization [19]) implies compression of constant size. The second part, which uses
ideas from [1, 24, 7], shows that the uniform convergence rate is captured by the graph dimension.
In this part we improve upon the previously known bounds.
(iv) Compactness for learning - if finite sub-classes of a given class are learnable, then the class is
learnable as well. Again, for binary-labelled classes, such compactness easily follows from known
properties of VC dimension. For general multi-labeled classes we derive this statement using a
corresponding compactness property for sample compression schemes, based on the work by [2].
General learning setting (Section 4). We continue with investigating general loss functions. This
part begins with a simple example in the context of linear regression, showing that for general loss
functions, learning is not equivalent to compression. We then consider an approximate variant of
compression schemes, which was used by [13, 12] in the context of classification, and observe that
learnability is equivalent to possessing an approximate compression scheme, whose size is roughly
the statistical sample complexity. This is in contrast to (standard) sample compression schemes, for
which the existence of such an equivalence (under the zero/one loss) is a long standing open problem,
even in the case of binary classification [25]. We conclude the paper by showing that - unlike for
zero/one loss functions - for general loss functions, PAC learnability and agnostic PAC learnability
are not equivalent. In fact, this is derived for a loss function that takes just three values. The proof of
this non-equivalence uses Ramsey theory for hypergraphs. The combinatorial nature of compression
schemes allows us to clearly identify the place where Ramsey theory is helpful. More generally, the
study of statistical learning theory via the lens of compression may shed light on additional useful
connections with different fields of mathematics.
We begin our investigation by breaking the definition of sample compression schemes into two parts.
The first part (which may seem useless at first sight) is about selection schemes. These are learning
algorithms whose output hypothesis depends on a selected small sub-sample of the input sample. The
second part of the definition is the sample-consistency guarantee; so, sample compression schemes
are selection schemes whose output hypothesis is consistent with the input sample. We then show
that selection schemes of small size do not overfit in that their empirical risk is close to their true
risk. Roughly speaking, this shows that for selection schemes there are no surprises: "what you see is
what you get".
2 Preliminaries
The definitions we use are based on the textbook [22].
Learnability and uniform convergence
A learning problem is specified by a set H of hypotheses, a domain Z of examples, and a loss function
ℓ : H × Z → R+. To ease the presentation, we shall only discuss loss functions that are bounded
from above by 1, although the results presented here can be extended to more general loss functions.
A sample S is a finite sequence S = (z_1, ..., z_m) ∈ Z^m. A learning algorithm is a mapping that
gets as an input a sample and outputs a hypothesis h.
In the context of supervised learning, hypotheses are functions from a domain X to a label set
Y, and the examples domain is the cartesian product Z := X × Y. In this context, the loss
ℓ(h, (x, y)) depends only on h(x) and y, and therefore in this case it is modelled as a function
ℓ : Y × Y → R+.
Given a distribution D on Z, the risk of an hypothesis h : X → Y is its expected loss: L_D(h) =
E_{z∼D}[ℓ(h, z)]. Given a sample S = (z_1, ..., z_m), the empirical risk of an hypothesis h is
L_S(h) = (1/m) Σ_{i=1}^m ℓ(h, z_i).
An hypothesis class H is a set of hypotheses. A distribution D is realizable by H if there exists h ∈ H
such that L_D(h) = 0. A sample S is realizable by H if there exists h ∈ H such that L_S(h) = 0.
A hypothesis class H has the uniform convergence property¹ if there exists a rate function d :
(0, 1)² → N such that for every ε, δ > 0 and distribution D over Z, if S is a sample of m ≥ d(ε, δ)
i.i.d. pairs generated by D, then with probability at least 1 - δ we have: ∀h ∈ H, |L_D(h) - L_S(h)| ≤ ε.
The class H is agnostic PAC learnable if there exists a learner A and a rate function d : (0, 1)² → N
such that for every ε, δ > 0 and distribution D over Z, if S is a sample of m ≥ d(ε, δ) i.i.d. pairs
generated by D, then with probability at least 1 - δ we have L_D(A(S)) ≤ inf_{h∈H} L_D(h) + ε. The
class H is PAC learnable if this condition holds for every realizable distribution D. The parameter ε
is referred to as the error parameter and δ as the confidence parameter.
Note that the uniform convergence property implies agnostic PAC learnability with the same rate
via any learning algorithm which outputs h ∈ H that minimizes the empirical risk, and that agnostic
PAC learnability implies PAC learnability with the same rate.
Selection and compression schemes
The variants of sample compression schemes that are discussed in this paper are based on the
following object, which we term selection scheme. We stress here that unlike sample compression
schemes, selection schemes are not associated with any hypothesis class.
A selection scheme is a pair (κ, ρ) of maps for which the following holds:
• κ is called the selection map. It gets as an input a sample S and outputs a pair (S′, b) where
S′ is a sub-sample² of S and b is a finite binary string, which we think of as side information.
• ρ is called the reconstruction map. It gets as an input a pair (S′, b) of the same type as the
output of κ and outputs a hypothesis h.
The size of (κ, ρ) on a given input sample S is defined to be |S′| + |b| where κ(S) = (S′, b). For an
input size m, we denote by k(m) the maximum size of the selection scheme on all inputs S of size at
most m. The function k(m) is called the size of the selection scheme. If k(m) is uniformly bounded
by a constant, which does not depend on m, then we say that the selection scheme has a constant
size; otherwise, we say that it has a variable size.
The definition of selection schemes is very similar to that of sample compression schemes. The
difference is that sample compression schemes are defined relative to a fixed hypothesis class with
respect to which they are required to have "correct" reconstructions whereas selection schemes do not
provide any correctness guarantee. The distinction between the "selection" part and the "correctness"
part is helpful for our presentation, and also provides some more insight into these notions.
A selection scheme (κ, ρ) is a sample compression scheme for H if for every sample S that is
realizable by H, L_S(ρ(κ(S))) = 0. A selection scheme (κ, ρ) is an agnostic sample compression
scheme for H if for every sample S, L_S(ρ(κ(S))) ≤ inf_{h∈H} L_S(h).
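To make these definitions concrete, here is a classical size-one sample compression scheme for thresholds on the real line under the zero/one loss (a standard illustration, sketched by us in Python; it is not taken from this paper).

```python
# H = { h_t : h_t(x) = 1 iff x >= t }.  kappa keeps one labeled example and no side
# information; rho rebuilds a hypothesis consistent with any realizable input sample.

def kappa(sample):                            # sample: nonempty list of (x, y), y in {0, 1}
    positives = [x for x, y in sample if y == 1]
    if positives:
        return [(min(positives), 1)], ""      # sub-sample of size 1, empty side info
    return [(max(x for x, _ in sample), 0)], ""

def rho(subsample, side_info):
    (x, y), = subsample
    t = x if y == 1 else x + 1.0              # any t > x works when all labels are 0
    return lambda z: 1 if z >= t else 0
```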
In the following sections, we will see different manifestations of the statement "compression ⇒
learning". An essential part of these statements boils down to a basic property of selection schemes,
that as long as k(m) is sufficiently smaller than m, a selection scheme based learner does not overfit
its training data (the proof appears in the full version of this paper).

¹ We omit the dependence on the loss function ℓ from this and similar definitions, since ℓ is clear from the
context.
² That is, if S = (z_1, ..., z_m) then S′ is of the form (z_{i_1}, ..., z_{i_ℓ}) for 1 ≤ i_1 < ... < i_ℓ ≤ m.
Theorem 2.1 ([22, Theorem 30.2]). Let (κ, ρ) be a selection scheme of size k = k(m), and let
A(S) = ρ(κ(S)). Then, for every distribution D on Z, integer m such that k ≤ m/2, and δ > 0,
we have

    Pr_{S∼D^m} [ |L_D(A(S)) - L_S(A(S))| ≥ √(ε · L_S(A(S))) + ε ] ≤ δ,

where ε = 50 · (k log(m/k) + log(1/δ)) / m.
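For a sense of scale, the ε of Theorem 2.1 can be evaluated numerically (our snippet; we use natural logarithms, while the theorem leaves the base of the logarithm to the constants).

```python
import math

def selection_eps(k, m, delta):
    """The epsilon of Theorem 2.1 for a size-k selection scheme on m examples."""
    return 50 * (k * math.log(m / k) + math.log(1 / delta)) / m

print(selection_eps(k=10, m=100_000, delta=0.01))   # ~0.048
```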
3 Zero/one loss functions
In this section we consider the zero/one loss function, which models categorization problems. We
study the relationships between uniform convergence, learnability, and sample compression schemes
under this loss. Subsection 3.1 establishes equivalence between learnability and compressibility of
a sublinear size. In Subsection 3.2 we use this equivalence to study the relationships between the
properties of uniform convergence, PAC, and agnostic learnability. In Subsection 3.2.1 we show that
agnostic learnability is equivalent to PAC learnability, In Subsection 3.2.2 we observe a dichotomy
concerning the size of sample compression schemes, and use it to establish a compactness property
of learnability. Finally, in Subsection 3.2.3 we study an extension of the Littlestone-Floyd-Warmuth
conjecture concerning an equivalence between learnability and sample compression schemes of fixed
size.
3.1 Learning is equivalent to sublinear compressing
The following theorem shows that if H has a sample compression scheme of size k = o(m), then it
is learnable. Its proof appears in the full version of this paper.
Theorem 3.1 (Compressing implies learning [16]). Let (?, ?) be a selection scheme of size k, let H
be a hypothesis class, and let D be a distribution on Z.
1. If (?, ?) is a sample compression scheme for H, and m is such that k(m) ? m/2, then
1
k log m
k + k + log ?
Pr m LD (? (? (S))) > 50
< ?.
S?D
m
2. If (?, ?) is an agnostic sample compression scheme for H, and m is such that k(m) ? m/2,
then
?
?
s
1
k log m
+
k
+
log
k
??
Pr ?LD (? (? (S))) > inf LD (h) + 100
< ?.
S?D m
h?H
m
The following theorem shows that learning implies compression. We present its proof in the full
version of this paper.
Theorem 3.2 (Learning implies compressing). Let H be an hypothesis class.
1. If H is agnostic PAC learnable with learning rate d(ε, δ), then it is PAC learnable with the
same learning rate.
2. If H is PAC learnable with learning rate d(ε, δ), then it has a sample compression scheme
of size k(m) = O(d₀ log(m) log log(m) + d₀ log(m) log(d₀)), where d₀ = d(1/3, 1/3).
3. If H has a sample compression scheme of size k(m), then it has an agnostic sample
compression scheme of the same size.
Remark. The third part in Theorem 3.2 does not hold when the loss function is general. In Section 4
we show that even if the loss function takes three possible values, then there are instances where a
class has a sample compression scheme but not an agnostic sample compression scheme.
4
3.2 Applications

3.2.1 Agnostic and PAC learnability are equivalent
Theorems 3.1 and 3.2 imply that if H is PAC learnable, then it is agnostic PAC learnable. Indeed, a
summary of the implications between learnability and compression given by Theorems 3.1 and 3.2
gives:
• An agnostic learner with rate d(ε, δ) implies a PAC learner with rate d(ε, δ).
• A PAC learner with rate d(ε, δ) implies a sample compression scheme of size k(m) =
O(d₀ · log(m) log(d₀ · log(m))) where d₀ = d(1/3, 1/3).
• A sample compression scheme of size k(m) implies an agnostic sample compression
scheme of size k(m).
• An agnostic sample compression scheme of size k(m) implies an agnostic learner with
error ε(d, δ) = 100 · √( (k(d) log(d/k(d)) + k(d) + log(1/δ)) / d ).
Thus, for multiclass categorization problems, agnostic learnability and PAC learnability are equivalent.
When the size of the label set Y is O(1), this equivalence follows from previous works that studied
extensions of the VC dimension to multiclass categorization problems [24, 3, 19, 1]. These works
show that PAC learnability and agnostic PAC learnability are equivalent to the uniform convergence
property, and therefore any ERM algorithm learns the class. Recently, [7] separated PAC learnability
and uniform convergence for large label sets by exhibiting PAC learnable hypothesis classes that do
not satisfy the uniform convergence property. In contrast, this shows that the equivalence between
PAC and agnostic learnability remains valid even when Y is large.
3.2.2 A dichotomy and compactness
Let H be an hypothesis class. Assume that H has a sample compression scheme of size, say, m/500
for some large m. Therefore, by Theorem 3.1, H is weakly PAC learnable with confidence 2/3, error
1/3, and O(1) examples. Now, Theorem 3.2 implies that H has a sample compression scheme of size
k(m) ? O(log(m) log log(m)). In other words, the following dichotomy holds: every hypothesis
class H either has a sample compression scheme of size k(m) = O(log(m) log log(m)), or any
sample compression scheme for it has size Ω(m).
This dichotomy implies the following compactness property for learnability under the zero/one loss.
Theorem 3.3. Let d ∈ N, and let H be an hypothesis class such that each finite subclass of H
is learnable with error 1/3, confidence 2/3 and d examples. Then H is learnable with error 1/3,
confidence 2/3 and O(d log2 (d) log log(d)) examples.
When Y = {0, 1}, the theorem follows by observing that if every subclass of H has VC
dimension at most d, then the VC dimension of H is at most d. We are not aware of a similar
argument that applies for a general label set. A related challenge, which was posed by [6], is to find a
"combinatorial" parameter, which captures multiclass learnability like the VC dimension captures it
in the binary-labeled case.
A proof of Theorem 3.3 appears in the full version of this paper. It uses an analogous³ compactness
property for sample compression schemes proven by [2].
3.2.3 Uniform convergence versus compression to constant size
Since the introduction of sample compression schemes by [16], they were mostly studied in the
context of binary-labeled hypothesis classes (the case Y = {0, 1}). In this context, a significant
number of works were dedicated to studying the relationship between VC dimension and the minimal
size of a compression scheme (e.g. [8, 14, 9, 2, 15, 4, 21, 20, 17]). Recently, [18] proved that any class
of VC dimension d has a compression scheme of size exponential in the VC dimension. Establishing
whether a compression scheme of size linear (or even polynomial) in the VC dimension remains
open [9, 25].
³ Ben-David and Litman proved a compactness result for sample compression schemes when Y = {0, 1},
but their argument generalizes for a general Y.
This question has a natural extension to multiclass categorization: Does every hypothesis class H
have a sample compression scheme of size O(d), where d = d_PAC(1/3, 1/3) is the minimal sample
complexity of a weak learner for H? In fact, in the case of multiclass categorization it is open whether
there is a sample compression scheme of size depending only on d.
We show here that the arguments from [18] generalize to uniform convergence.
Theorem 3.4. Let H be an hypothesis class with uniform convergence rate d_UC(ε, δ). Then H has a
sample compression scheme of size exp(d), where d = d_UC(1/3, 1/3).
The proof of this theorem uses the notion of the graph dimension, which was defined by [19].
Theorem 3.4 is proved using the following two ingredients. First, the construction in [18] yields a
sample compression scheme of size exp(dimG (H)). Second, the graph dimension determines the
uniform convergence rate, similarly to that the VC dimension does it in the binary-labeled case.
Theorem 3.5. Let H be an hypothesis class, let d = dimG(H), and let d_UC(ε, δ) denote the uniform
convergence rate of H. Then, there exist constants C_1, C_2 such that

    C_1 · (d + log(1/δ)) / ε² ≤ d_UC(ε, δ) ≤ C_2 · (d log(1/ε) + log(1/δ)) / ε².
Parts of this result are well-known and appear in the literature: The upper bound follows from
Theorem 5 of [7], and the core idea of the argument dates back to the articles of [1] and of [24]. A
lower bound with a worse dependence on ε follows from Theorem 9 of [7]. A proof of Theorem 3.5
appears in the full version of this paper.
4 General loss functions
We have seen that in the case of the zero/one loss function, an existence of a sublinear sample
compression scheme is equivalent to learnability. It is natural to ask whether this phenomenon
extends to other loss functions. The direction "compression ⇒ learning" remains valid for general
loss functions. In contrast, as will be discussed in this section, the other direction fails for general
loss functions.
However, a natural adaptation of sample compression schemes, which we term approximate sample
compression schemes, allows the extension of the equivalence to arbitrary loss functions. Approximate
compression schemes were previously studied in the context of classification (e.g. [13, 12]). In
Subsection 4.1 we argue that in general sample compression schemes are not equivalent to learnability;
specifically, there is no agnostic sample compression scheme for linear regression. In Subsection 4.2
we define approximate sample compression schemes and establish their equivalence with learnability.
Finally, in Subsection 4.3 we use this equivalence to demonstrate classes that are PAC learnable but
not agnostic PAC learnable. This manifests a difference with the zero/one loss under which agnostic
and PAC learning are equivalent (see 3.2.1). It is worth noting that the loss function we use to break
the equivalence takes only three values (compared to the two values of the zero/one loss function).
4.1 No agnostic compression for linear regression
We next show that in the setup of linear regression, which is known to be agnostic PAC learnable,
there is no agnostic sample compression scheme. For convenience, we shall restrict the discussion
to zero-dimensional linear regression. In this setup (see footnote 4), the sample consists of m examples $S = (z_1, z_2, \dots, z_m) \in [0, 1]^m$, and the loss function is defined by $\ell(h, z) = (h - z)^2$. The goal is to find $h \in \mathbb{R}$ which minimizes $L_S(h)$. The empirical risk minimizer (ERM) is exactly the average $h^\star = \frac{1}{m}\sum_i z_i$, and for every $h \ne h^\star$ we have $L_S(h) > L_S(h^\star)$. Thus, an agnostic sample compression scheme in this setup should compress S to a subsequence and a binary string of side information, from which the average of S can be reconstructed. We prove that there is no such compression.
Theorem 4.1. There is no agnostic sample compression scheme for zero-dimensional linear regression with size $k(m) \le m/2$.
(Footnote 4) One may think of X as a singleton.
The proof appears in the full version of this paper. The idea is to restrict our attention to sets $T \subseteq [0, 1]$
for which every subset of $T$ has a distinct average. It follows that any sample compression scheme
for samples from $T$ must perform a compression that is information-theoretically impossible.
4.2 Approximate sample compression schemes
The previous example suggests the question of whether one can generalize the definition of compression to fit problems where the loss function is not zero/one. Taking cues from PAC and agnostic PAC learning, we consider the following definition. We say that the selection scheme $(\kappa, \rho)$ is an $\epsilon$-approximate sample compression scheme for H if for every sample S that is realizable by H, $L_S(\rho(\kappa(S))) \le \epsilon$. It is called an $\epsilon$-approximate agnostic sample compression scheme for H if for every sample S, $L_S(\rho(\kappa(S))) \le \inf_{h \in H} L_S(h) + \epsilon$.
Let us start by revisiting the case of zero-dimensional linear regression. Even though it does not have an agnostic compression scheme of sublinear size, it does have an $\epsilon$-approximate agnostic sample compression scheme of size $k = O(\log(1/\epsilon)/\epsilon)$, which we now describe.
Given a sample $S = (z_1, \dots, z_m) \in [0, 1]^m$, the average $h^\star = \sum_{i=1}^m z_i/m$ is the ERM of S. Let
$$L^\star = L_S(h^\star) = \sum_{i=1}^m z_i^2/m - \Big(\sum_{i=1}^m z_i/m\Big)^2.$$
It is enough to show that there exists a sub-sample $S' = (z_{i_1}, \dots, z_{i_\ell})$ of size $\ell = \lceil 1/\epsilon \rceil$ such that $L_S\big(\sum_{j=1}^{\ell} z_{i_j}/\ell\big) \le L^\star + \epsilon$. It turns out that picking $S'$ at random suffices. Let $Z_1, \dots, Z_\ell$ be independent random variables that are uniformly distributed over S and let $H = \frac{1}{\ell}\sum_{i=1}^{\ell} Z_i$ be their average. Thus, $\mathbb{E}[H] = h^\star$ and $\mathbb{E}[L_S(H)] = L^\star + \mathrm{Var}[H] \le L^\star + \epsilon$, since $\mathrm{Var}[H] = \mathrm{Var}[Z_1]/\ell \le 1/\ell \le \epsilon$. In particular, this means that there exists some sub-sample of size $\ell$ whose average has loss at most $L^\star + \epsilon$. Encoding such a sub-sample requires $O(\log(1/\epsilon)/\epsilon)$ additional bits of side information.
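To make the random-subsample argument concrete, the following short simulation (a sketch; the sample S, the accuracy ε, the random seed, and the use of numpy are our own illustration choices, not part of the paper) draws sub-samples of size ℓ = ⌈1/ε⌉ and checks that their averages typically have loss within ε of the ERM loss L*.

```python
import numpy as np

rng = np.random.default_rng(0)
m, eps = 1000, 0.05
S = rng.uniform(0.0, 1.0, size=m)       # sample from [0, 1]^m

L_S = lambda h: np.mean((h - S) ** 2)   # empirical loss on S
h_star = S.mean()                       # the ERM is exactly the average
L_star = L_S(h_star)

ell = int(np.ceil(1.0 / eps))           # sub-sample size = ceil(1/eps)
excess = []
for _ in range(1000):
    sub = rng.choice(S, size=ell, replace=True)  # Z_1, ..., Z_ell uniform over S
    excess.append(L_S(sub.mean()) - L_star)
excess = np.array(excess)

# E[L_S(H)] = L* + Var[H] <= L* + eps, so the mean excess sits below eps
# and in particular some sub-sample achieves excess <= eps.
print(f"mean excess = {excess.mean():.4f}  (bound: eps = {eps})")
print(f"fraction of draws with excess <= eps: {(excess <= eps).mean():.3f}")
```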
We now establish the equivalence between approximate compression and learning (the proof is similar
to the proof of Theorem 3.1).
Theorem 4.2 (Approximate compressing implies learning). Let $(\kappa, \rho)$ be a selection scheme of size k, let H be an hypothesis class, and let D be a distribution on Z.
1. If $(\kappa, \rho)$ is an $\epsilon$-approximate sample compression scheme for H, and m is such that $k(m) \le m/2$, then
$$\Pr_{S \sim D^m}\left[ L_D(\rho(\kappa(S))) > \epsilon + 100\sqrt{\frac{k \log(m/k) + \log(1/\delta)}{m}} \right] < \delta.$$
2. If $(\kappa, \rho)$ is an $\epsilon$-approximate agnostic sample compression scheme for H, and m is such that $k(m) \le m/2$, then
$$\Pr_{S \sim D^m}\left[ L_D(\rho(\kappa(S))) > \inf_{h \in H} L_D(h) + \epsilon + 100\sqrt{\frac{k \log(m/k) + \log(1/\delta)}{m}} \right] < \delta.$$
The following theorem shows that every learnable class has an approximate sample compression scheme. The proof of this theorem is straightforward, in contrast with the proof of the analog statement in the case of zero/one loss functions and compression schemes without error.
Theorem 4.3 (Learning implies approximate compressing). Let H be an hypothesis class.
1. If H is PAC learnable with rate $d(\epsilon, \delta)$, then it has an $\epsilon$-approximate sample compression scheme of size $k \le O(d \log d)$ with $d = \min_{\delta < 1} d(\epsilon, \delta)$.
2. If H is agnostic PAC learnable with rate $d(\epsilon, \delta)$, then it has an $\epsilon$-approximate agnostic sample compression scheme of size $k \le O(d \log d)$ with $d = \min_{\delta < 1} d(\epsilon, \delta)$.
The proof appears in the full version of this paper.
4.3 A separation between PAC and agnostic learnability
Here we establish a separation between PAC and agnostic PAC learning under loss functions which
take more than two values:
Theorem 4.4. There exist a hypothesis class $H \subseteq Y^X$ and a loss function $\ell : Y \times Y \to \{0, \frac{1}{2}, 1\}$ such that H is PAC learnable and not agnostic PAC learnable.
The main challenge in proving this theorem is showing that H is not agnostic PAC learnable. We
do this by showing that H does not have an approximate sample compression scheme. The crux of
the argument is an application of Ramsey theory; the combinatorial nature of compression allows us to
identify the place where Ramsey theory is helpful. The proof appears in the full version of this paper.
5 Discussion and further research
The compressibility-learnability equivalence is a fundamental link in statistical learning theory. From
a theoretical perspective this link can serve as a guideline for proving both negative/impossibility
results, and positive/possibility results.
From the perspective of positive results, just recently, [5] relied on this paper in showing that
every learnable problem is learnable with robust generalization guarantees. Another important
example appears in the work of boosting weak learners [11] (see Chapter 4.2). These works follow a
similar approach, that may be useful in other scenarios: (i) transform the given learner to a sample
compression scheme, and (ii) utilize properties of compression schemes to derive the desired result.
The same approach is also used in this paper in Section 3.2.1, where it is shown that PAC learning
implies agnostic PAC learning under 0/1 loss; we first transform the PAC learner to a realizable
compression scheme, and then use the realizable compression scheme to get an agnostic compression
scheme that is also an agnostic learner. We note that we are not aware of a proof that directly
transforms the PAC learner to an agnostic learner without using compression.
From the perspective of impossibility/hardness results, this link implies that to show that a problem is
not learnable, it suffices to show that it is not compressible. In Section 4.3, we follow this approach
when showing that PAC and agnostic PAC learnability are not equivalent for general loss functions.
This link may also have a practical impact, since it offers a rule of thumb for algorithm designers: if a
problem is learnable then it can be learned by a compression algorithm, whose design boils down to
an intuitive principle: "find a small insightful subset of the input data." For example, in geometrical
problems, this insightful subset often appears on the boundary of the data points (see e.g. [12]).
References
[1] S. Ben-David, N. Cesa-Bianchi, D. Haussler, and P. M. Long. Characterizations of learnability for classes of {0,...,n}-valued functions. J. Comput. Syst. Sci., 50(1):74–86, 1995.
[2] Shai Ben-David and Ami Litman. Combinatorial variability of Vapnik-Chervonenkis classes with applications to sample compression schemes. Discrete Applied Mathematics, 86(1):3–25, 1998.
[3] Anselm Blumer, Andrzej Ehrenfeucht, David Haussler, and Manfred K. Warmuth. Learnability and the Vapnik-Chervonenkis dimension. J. Assoc. Comput. Mach., 36(4):929–965, 1989.
[4] A. Chernikov and P. Simon. Externally definable sets and dependent pairs. Israel Journal of Mathematics, 194(1):409–425, 2013.
[5] Rachel Cummings, Katrina Ligett, Kobbi Nissim, Aaron Roth, and Zhiwei Steven Wu. Adaptive learning with robust generalization guarantees. In Proceedings of the 29th Conference on Learning Theory, COLT 2016, New York, USA, June 23-26, 2016, pages 772–814, 2016.
[6] A. Daniely and S. Shalev-Shwartz. Optimal learners for multiclass problems. In COLT, volume 35, pages 287–316, 2014.
[7] Amit Daniely, Sivan Sabato, Shai Ben-David, and Shai Shalev-Shwartz. Multiclass learnability and the ERM principle. Journal of Machine Learning Research, 16:2377–2404, 2015.
[8] S. Floyd. Space-bounded learning and the Vapnik-Chervonenkis dimension. In COLT, pages 349–364, 1989.
[9] Sally Floyd and Manfred K. Warmuth. Sample compression, learnability, and the Vapnik-Chervonenkis dimension. Machine Learning, 21(3):269–304, 1995.
[10] Yoav Freund. Boosting a weak learning algorithm by majority. Inf. Comput., 121(2):256–285, 1995.
[11] Yoav Freund and Robert E. Schapire. Boosting: Foundations and Algorithms. Adaptive Computation and Machine Learning. MIT Press, 2012.
[12] Lee-Ad Gottlieb, Aryeh Kontorovich, and Pinhas Nisnevitch. Nearly optimal classification for semimetrics. CoRR, abs/1502.06208, 2015.
[13] Thore Graepel, Ralf Herbrich, and John Shawe-Taylor. PAC-Bayesian compression bounds on the prediction error of learning algorithms for classification. Machine Learning, 59(1-2):55–76, 2005.
[14] D. P. Helmbold, R. H. Sloan, and M. K. Warmuth. Learning integer lattices. SIAM J. Comput., 21(2):240–266, 1992.
[15] Dima Kuzmin and Manfred K. Warmuth. Unlabeled compression schemes for maximum classes. Journal of Machine Learning Research, 8:2047–2081, 2007.
[16] Nick Littlestone and Manfred Warmuth. Relating data compression and learnability. Unpublished, 1986.
[17] Roi Livni and Pierre Simon. Honest compressions and their application to compression schemes. In COLT, pages 77–92, 2013.
[18] Shay Moran and Amir Yehudayoff. Sample compression schemes for VC classes. J. ACM, 63(3):21:1–21:10, June 2016.
[19] B. K. Natarajan. On learning sets and functions. Machine Learning, 4:67–97, 1989.
[20] B. I. P. Rubinstein and J. H. Rubinstein. A geometric approach to sample compression. Journal of Machine Learning Research, 13:1221–1261, 2012.
[21] Benjamin I. P. Rubinstein, Peter L. Bartlett, and J. H. Rubinstein. Shifting: One-inclusion mistake bounds and sample compression. J. Comput. Syst. Sci., 75(1):37–59, 2009.
[22] Shai Shalev-Shwartz and Shai Ben-David. Understanding Machine Learning: From Theory to Algorithms. Cambridge University Press, New York, NY, USA, 2014.
[23] Vladimir Vapnik. Statistical Learning Theory. Wiley, 1998.
[24] V. N. Vapnik and A. Ya. Chervonenkis. On the uniform convergence of relative frequencies of events to their probabilities. Theory Probab. Appl., 16:264–280, 1971.
[25] Manfred K. Warmuth. Compressing to VC dimension many points. In COLT/Kernel, pages 743–744, 2003.
6,070 | 6,491 | Robust Spectral Detection of Global Structures in the
Data by Learning a Regularization
Pan Zhang
Institute of Theoretical Physics, Chinese Academy of Sciences, Beijing 100190, China
panzhang@itp.ac.cn
Abstract
Spectral methods are popular in detecting global structures in the given data that
can be represented as a matrix. However when the data matrix is sparse or noisy,
classic spectral methods usually fail to work, due to localization of eigenvectors
(or singular vectors) induced by the sparsity or noise. In this work, we propose
a general method to solve the localization problem by learning a regularization
matrix from the localized eigenvectors. Using matrix perturbation analysis, we
demonstrate that the learned regularizations suppress down the eigenvalues associated with localized eigenvectors and enable us to recover the informative eigenvectors representing the global structure. We show applications of our method
in several inference problems: community detection in networks, clustering from
pairwise similarities, rank estimation and matrix completion problems. Using extensive experiments, we illustrate that our method solves the localization problem
and works down to the theoretical detectability limits in different kinds of synthetic data. This is in contrast with existing spectral algorithms based on data
matrix, non-backtracking matrix, Laplacians and those with rank-one regularizations, which perform poorly in the sparse case with noise.
1 Introduction
In many statistical inference problems, the task is to detect, from given data, a global structure such
as low-rank structure or clustering. The task is usually hard to solve since modern datasets usually
have a large dimensionality. When the dataset can be represented as a matrix, spectral methods are
popular as it gives a natural way to reduce the dimensionality of data using eigenvectors or singular
vectors. In the point-of-view of inference, data can be seen as measurements to the underlying
structure. Thus more data gives more precise information about the underlying structure.
However in many situations when we do not have enough measurements, i.e. the data matrix is
sparse, standard spectral methods usually have localization problems thus do not work well. One
example is the community detection in sparse networks, where the task is to partition nodes into
groups such that there are many edges connecting nodes within the same group and comparatively
few edges connecting nodes in different groups. It is well known that when the graph has a large
connectivity c, simply using the first few eigenvectors of the adjacency matrix A ? {0, 1}n?n
(with $A_{ij} = 1$ denoting an edge between node i and node j, and $A_{ij} = 0$ otherwise) gives a good result. In this case, like that of a sufficiently dense Erdős–Rényi (ER) random graph with average degree c, the spectral density follows Wigner's semicircle rule, $P(\lambda) = \sqrt{4c - \lambda^2}/(2\pi c)$, and there
is a gap between the edge of bulk of eigenvalues and the informative eigenvalue that represents the
underlying community structure. However when the network is large and sparse, the spectral density
of the adjacency matrix deviates from the semicircle, the informative eigenvalue is hidden in the
bulk of eigenvalues, as displayed in Fig. 1 left. Its eigenvectors associated with largest eigenvalues
(which are roughly proportional to log n/ log log n for ER random graphs) are localized on the large-
degree nodes, thus reveal only local structures about large degrees rather than the underlying global
structure. Other standard matrices for spectral clustering [19, 22], e.g. Laplacian, random walk
matrix, normalized Laplacian, all have localization problems but on different local structures such
as dangling trees.
Another example is the matrix completion problem, which asks to infer missing entries of a matrix $A \in \mathbb{R}^{m \times n}$ with rank $r \ll \sqrt{mn}$ from only a few observed entries. A popular method for this
problem is based on the singular value decomposition (SVD) of the data matrix. However it is
well known that when the matrix is sparse, SVD-based method performs very poorly, because the
singular vectors corresponding to the largest singular values are localized, i.e. highly concentrated
on high-weight column or row indices.
A simple way to ease the pain of localization induced by high degree or weight is trimming [6, 13]
which sets to zero columns or rows with a large degree or weight. However trimming throws away
part of the information, thus does not work all the way down to the theoretical limit in the community detection problem [6, 15]. It also performs worse than other methods in matrix completion
problem [25].
In recent years, many methods have been proposed for the sparsity-problem. One kind of methods
use new linear operators related to the belief propagation and Bethe free energy, such as the nonbacktracking matrix [15] and Bethe Hessian [24]. Another kind of methods add to the data matrix or
its variance a rank-one regularization matrix [2, 11, 16?18, 23]. These methods are quite successful
in some inference problems in the sparse regime. However in our understanding none of them works
in a general way to solve the localization problem. For instance, the non-backtracking matrix and
the Bethe Hessian work very well when the graph has a locally-tree-like structure, but they have
again the localization problems when the system has short loops or sub-structures like triangles and
cliques. Moreover its performance is sensitive to the noise in the data [10]. Rank-one regularizations
have been used for a long time in practice, the most famous example is the ?teleportation? term
in the Google matrix. However there is no satisfactory way to determine the optimal amount of
regularization in general. Moreover, analogous to the non-backtracking matrix and Bethe Hessian,
the rank-one regularization approach is also sensitive to the noise, as we will show in the paper.
The main contribution of this paper is to illustrate how to solve the localization problem of spectral methods for general inference problems in sparse regime and with noise, by learning a proper
regularization that is specific for the given data matrix from its localized eigenvectors. In the following text we will first discuss in Sec. 2 that all three methods for community detection in sparse
graphs can be put into the framework of regularization. Thus the drawbacks of existing methods
can be seen as improper choices of regularizations. In Sec. 3 we investigate how to choose a good
regularization that is dedicated for the given data, rather than taking a fixed-form regularization as
in the existing approaches. We use matrix perturbation analysis to illustrate how the regularization works in penalizing the localized eigenvectors, and making the informative eigenvectors that
correlate with the global structure float to the top positions in spectrum. In Sec. 4 we use extensive numerical experiments to validate our approach on several well-studied inference problems,
including the community detection in sparse graphs, clustering from sparse pairwise entries, rank
estimation and matrix completion from few entries.
Figure 1: Spectral density of the adjacency matrix (left) and X-Laplacian (right) of a graph generated
by the stochastic block model with n = 10000 nodes, average degree c = 3, q = 2 groups and
ε = 0.125. Red arrows point to eigenvalues out of the bulk.
2 Regularization as a unified framework
We see that the above three methods for the community detection problem in sparse graphs, i.e.
trimming, non-backtracking/Bethe Hessian, and rank-one regularizations, can be understood as doing different ways of regularizations. In this framework, we consider a regularized matrix
$$L = \hat{A} + \hat{R}. \qquad (1)$$
Here matrix $\hat{A}$ is the data matrix or its (symmetric) variance, such as $\hat{A} = D^{-1/2} A D^{-1/2}$ with D denoting the diagonal matrix of degrees, and matrix $\hat{R}$ is a regularization matrix. The rank-one regularization approaches [2, 11, 16–18, 23] fall naturally into this framework as they set $\hat{R}$ to be a rank-one matrix, $-\alpha \mathbf{1}\mathbf{1}^T$, with $\alpha$ being a tunable parameter controlling the strength of the regularization. It is also easy to see that in trimming, $\hat{A}$ is set to be the adjacency matrix and $\hat{R}$ contains entries to remove columns or rows with high degrees from A.
For spectral algorithms using the non-backtracking matrix, its relation to the form of Eq. (1) is not straightforward. However we can link them using the theory of the graph zeta function [8], which says that an eigenvalue $\lambda$ of the non-backtracking operator satisfies the following quadratic eigenvalue equation,
$$\det[\lambda^2 I - \lambda A + (D - I)] = 0,$$
where I is the identity matrix. It indicates that a particular vector v that is related to the eigenvector of the non-backtracking matrix satisfies $\big(A - \frac{D - I}{\lambda}\big)v = \lambda v$. Thus the spectral clustering algorithm using the non-backtracking matrix is equivalent to the spectral clustering algorithm using a matrix of the form in Eq. (1), with $\hat{A} = A$, $\hat{R} = -\frac{D - I}{\lambda}$, and $\lambda$ acting as a parameter. We note here that the parameter does not necessarily have to be an eigenvalue of the non-backtracking matrix. Actually a
range of parameters work well in practice, like those estimated from the spin-glass transition of the
system [24]. So we have related different approaches of resolving localizations of spectral algorithm
in sparse graphs into the framework of regularization. Although this relation is in the context of
community detection in networks, we think it is a general point-of-view, when the data matrix has a
general form rather than a {0, 1} matrix.
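Since all three approaches instantiate Eq. (1) with a different choice of regularization, it may help to see them side by side as code. The following sketch (our own illustration in numpy; the parameter names alpha and lam, and the exact trimming rule, are assumptions for the demo rather than the paper's exact choices) builds the trimmed matrix, the rank-one-regularized matrix, and the non-backtracking-related form:

```python
import numpy as np

def trim(A, max_degree):
    """Trimming: zero out rows and columns whose degree exceeds max_degree."""
    B = A.astype(float)
    heavy = B.sum(axis=1) > max_degree
    B[heavy, :] = 0.0
    B[:, heavy] = 0.0
    return B

def rank_one_regularized(A, alpha):
    """Rank-one regularization: L = A - alpha * 11^T, alpha a tunable parameter."""
    n = A.shape[0]
    return A - alpha * np.ones((n, n))

def nb_form(A, lam):
    """Non-backtracking-related form: L = A - (D - I)/lam, cf. the zeta-function relation."""
    n = A.shape[0]
    D = np.diag(A.sum(axis=1))
    return A - (D - np.eye(n)) / lam
```

All three return a symmetric matrix of the form of Eq. (1); the differences lie entirely in the regularization term, which motivates learning that term from the data instead of fixing it in advance.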
As we have argued in the introduction, above three ways of regularization work from case to case
and have different problems, especially when system has noise. It means that in the framework
? added by these methods do not work in a
of regularizations, the effective regularization matrix R
general way and is not robust. In our understanding, the problem arises from the fact that in all
these methods, the form of regularization is fixed for all kinds of data, regardless of different reasons
for the localization. Thus one way to solve the problem would be looking for the regularizations
that are specific for the given data, as a feature. In the following section we will introduce our
method explicitly addressing how to learn such regularizations from localized eigenvectors of the
data matrix.
3 Learning regularizations from localized eigenvectors
The reason that the informative eigenvectors are hidden in the bulk is that some random eigenvectors
have large eigenvalues, due to the localization which represents the local structures of the system. On
the complementary side, if these eigenvectors are not localized, they are supposed to have smaller
eigenvalues than the informative ones which reveal the global structures of the graph. This is the
main assumption that our idea is based on.
In this work we use the Inverse Participation Ratio (IPR), $I(v) = \sum_{i=1}^n v_i^4$, to quantify the amount of localization of a (normalized) eigenvector v. IPR has been used frequently in physics, for example for distinguishing the extended state from the localized state when applied to the wave function [3]. It is easy to check that I(v) ranges from $\frac{1}{n}$ for the vector $\{\frac{1}{\sqrt{n}}, \frac{1}{\sqrt{n}}, \dots, \frac{1}{\sqrt{n}}\}$ to 1 for the vector {0, ..., 0, 1, 0, ..., 0}. That is, a larger I(v) indicates more localization in vector v.
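As a quick illustration (our own minimal example, not from the paper), the IPR of a delocalized and of a localized unit vector can be computed directly:

```python
import numpy as np

def ipr(v):
    """Inverse participation ratio I(v) = sum_i v_i^4 of a (re)normalized vector."""
    v = v / np.linalg.norm(v)
    return float(np.sum(v ** 4))

n = 1000
flat = np.ones(n)                  # delocalized: I(v) = 1/n
spike = np.zeros(n); spike[0] = 1  # localized:   I(v) = 1
print(ipr(flat), ipr(spike))       # ~0.001 and 1.0
```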
Our idea is to create a matrix LX with similar structures to A, but with non-localized leading eigenvectors. We call the resulting matrix X-Laplacian, and define it as LX = A + X, where matrix A is
the data matrix (or its variant), and X is learned using the procedure detailed below:
Algorithm 1: Regularization Learning
Input: Real symmetric matrix A, number of eigenvectors q, learning rate $\eta = O(1)$, threshold $\theta$.
Output: X-Laplacian, $L_X$, whose leading eigenvectors reveal the global structures in A.
1. Set X to be the all-zero matrix.
2. Find the set of eigenvectors $U = \{u_1, u_2, \dots, u_q\}$ associated with the first q largest eigenvalues (in algebra) of $L_X$.
3. Identify the eigenvector v that has the largest inverse participation ratio among the q eigenvectors in U. That is, find $v = \arg\max_{u \in U} I(u)$.
4. If $I(v) < \theta$, return $L_X = A + X$; otherwise, $\forall i,\ X_{ii} \leftarrow X_{ii} - \eta v_i^2$, then go to step 2.
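A minimal dense-matrix rendering of Algorithm 1 could look as follows (a sketch; the defaults η = 10 and θ = 5/n follow the text, while everything else, including using numpy's eigh on the full matrix rather than a sparse eigensolver, is our own simplification):

```python
import numpy as np

def x_laplacian(A, q, eta=10.0, theta=None, max_iter=2000):
    """Learn a diagonal X so the q leading eigenvectors of L_X = A + X delocalize."""
    n = A.shape[0]
    theta = 5.0 / n if theta is None else theta
    X = np.zeros(n)                        # diagonal of the regularization matrix
    U = None
    for _ in range(max_iter):
        L = A + np.diag(X)
        vals, vecs = np.linalg.eigh(L)     # eigenvalues in ascending order
        U = vecs[:, -q:]                   # q leading (largest) eigenvectors
        iprs = np.sum(U ** 4, axis=0)      # IPR of each leading eigenvector
        j = int(np.argmax(iprs))           # most localized among them
        if iprs[j] < theta:
            return L, U                    # all leading eigenvectors delocalized
        X -= eta * U[:, j] ** 2            # step 4: X_ii <- X_ii - eta * v_i^2
    return A + np.diag(X), U               # give up after max_iter sweeps
```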
We can see that the regularization matrix X is a diagonal matrix, its diagonal entries are learned
gradually from the most localized vector among the first several eigenvectors. The effect of X is to
penalize the localized eigenvectors, by suppressing down the eigenvalues associated with the localized eigenvectors. The learning will continue until all q leading eigenvectors are delocalized, thus
are supposed to correlate with the global structure rather than the local structures. As an example,
we show the effect of X to the spectrum in Fig. 1. In the left panel, we plot the spectrum of the
adjacency matrix (i.e. before learning X) and the X-Laplacian (i.e. after learning X) of a sparse
network generated by the stochastic block model with q = 2 groups. For the adjacency matrix in
the left panel, localized eigenvectors have large eigenvalues and contribute a tail to the semicircle,
covering the informative eigenvalue, leaving only one eigenvalue, which corresponds to the eigenvector that essentially sorts vertices according to their degree, out of the bulk. The spectral density
of X-Laplacian is shown in the right panel of Fig. 1. We can see that the right corner of the continues
part of the spectral density appearing in the spectrum of the adjacency matrix , is missing here. This
is because due to the effect of X, the eigenvalues that are associated with localized eigenvectors in
the adjacency matrix are pushed into the bulk, maintaining a gap between the edge of bulk and the
informative eigenvalue (being pointed by the left red arrow in the figure).
The key procedure of the algorithm is the learning part in step 4, which updates diagonal terms of
matrix X using the most localized eigenvector v. Throughout the paper, by default we use learning rate $\eta = 10$ and threshold $\theta = 5/n$. As $\eta = O(1)$ and $v_i^2 = O(1/n)$, we can treat the learned entries in each step, $\tilde{L}$, as a perturbation to matrix $L_X$. After applying this perturbation, we anticipate that an eigenvalue of L changes from $\lambda_i$ to $\lambda_i + \tilde{\lambda}_i$, and an eigenvector changes from $u_i$ to $u_i + \tilde{u}_i$. If we assume that matrix $L_X$ is not ill-conditioned, and the first few eigenvectors that we care about are distinct, then we have $\tilde{\lambda}_i = u_i^T \tilde{L} u_i$. Derivation of the above expression is straightforward, but for completeness we put the derivations in the SI text. In our algorithm, $\tilde{L}$ is a diagonal matrix with entries $\tilde{L}_{ii} = -\eta v_i^2$, with v denoting the identified eigenvector that has the largest inverse participation ratio, so the last equation can be written as $\tilde{\lambda}_i = -\eta \sum_k v_k^2 u_{ik}^2$. For the identified vector v, we further have
$$\tilde{\lambda}_v = -\eta \sum_i v_i^4 = -\eta I(v). \qquad (2)$$
It means the eigenvalue of the identified eigenvector with inverse participation ratio I(v) is decreased by the amount $\eta I(v)$. That is, the more localized the eigenvector is, the larger the penalty on its eigenvalue.
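The first-order prediction of Eq. (2) can be checked numerically (a sketch with our own toy matrix; η is kept small here so that the first-order expansion is accurate, unlike the larger default used in the algorithm):

```python
import numpy as np

rng = np.random.default_rng(1)
n, eta = 200, 0.01
A = rng.normal(size=(n, n)); A = (A + A.T) / 2   # random symmetric matrix

vals, vecs = np.linalg.eigh(A)
v = vecs[:, -1]                                   # stand-in for the identified vector
I_v = np.sum(v ** 4)

L_pert = A - eta * np.diag(v ** 2)                # apply the diagonal perturbation
new_val = np.linalg.eigvalsh(L_pert)[-1]

print("observed shift     :", new_val - vals[-1])
print("predicted -eta*I(v):", -eta * I_v)         # Eq. (2)
```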
In addition to the penalty to the localized eigenvalues, we see that the leading eigenvectors are delocalizing during learning. We have analyzed the change of eigenvectors after the perturbation given by the identified vector v, and obtained (see SI for the derivations) the change of an eigenvector $u_i$ as a function of all the other eigenvalues and eigenvectors, $\tilde{u}_i = -\eta \sum_{j \ne i} \frac{\sum_k u_{jk} v_k^2 u_{ik}}{\lambda_i - \lambda_j} u_j$. Then the inverse participation ratio of the new vector $u_i + \tilde{u}_i$ can be written as
$$I(u_i + \tilde{u}_i) = I(u_i) - 4\eta \sum_{l=1}^{n} \sum_{j \ne i} \frac{u_{jl}^2 v_l^2 u_{il}^4}{\lambda_i - \lambda_j} - 4\eta \sum_{l=1}^{n} \sum_{j \ne i} \sum_{k \ne l} \frac{u_{il}^3 v_k^2 u_{jk} u_{ik} u_{jl}}{\lambda_i - \lambda_j}. \qquad (3)$$
As eigenvectors $u_i$ and $u_j$ are orthogonal to each other, the term $4\eta \sum_{l=1}^{n} \sum_{j \ne i} u_{jl}^2 v_l^2 u_{il}^4/(\lambda_i - \lambda_j)$ can be seen as a signal term and the last term can be seen as cross-talk noise with zero mean. We see that the cross-talk noise has a small variance, and empirically its effect can be neglected. For the
leading eigenvector corresponding to the largest eigenvalue $\lambda_i = \lambda_1$, it is straightforward to see that
the signal term is strictly positive. Thus if the learning is slow enough, the perturbation will always
decrease the inverse participation ratio of the leading eigenvector. This is essentially an argument
for convergence of the algorithm. For other top eigenvectors, i.e. the second and third eigenvectors
and so on, though $\lambda_i - \lambda_j$ is not strictly positive, there are many more positive terms than negative
terms in the sum, thus the signal should be positive with a high probability. Thus one can conclude
that the process of learning X makes first few eigenvectors de-localizing.
An example illustrating the process of the learning is shown in Fig. 2 where we plot the second
eigenvector vs. the third eigenvector, at several times steps during the learning, for a network generated by the stochastic block model with q = 3 groups. We see that at t = 0, i.e. without learning,
both eigenvectors are localized, with a large range of distribution in entries. The color of eigenvectors encodes the group membership in the planted partition. We see that at t = 0 three colors
are mixed together indicating that two eigenvectors are not correlated with the planted partition. At
t = 4 three colors begin to separate, and range of entry distribution become smaller, indicating that
the localization is lighter. At t = 25, three colors are more separated, the partition obtained by applying k-means algorithm using these vectors successfully recovers 70% of the group memberships.
Moreover we can see that the range of entries of eigenvectors shrink to [?0.06, 0.06], giving a small
inverse participation ratio.
Figure 2: The second eigenvector V2 compared with the third eigenvector V3 of LX for a network at
three steps with t = 0, 4 and 25 during learning. The network has n = 42000 nodes, q = 3 groups,
average degree c = 3, ε = 0.08; three colors represent group labels in the planted partition.
4 Numerical evaluations
In this section we validate our approach with experiments on several inference problems, i.e. community detection problems, clustering from sparse pairwise entries, rank estimation and matrix completion from a few entries. We will compare performance of the X-Laplacian (using mean-removed
data matrix) with recently proposed state-of-the-art spectral methods in the sparse regime.
4.1 Community Detection
First we use synthetic networks generated by the stochastic block model [9], and its variant with
noise [10]. The standard Stochastic Block Model (SBM), also called the planted partition model, is
a popular model to generate ensemble of networks with community structure. There are q groups
of nodes and a planted partition {t?i } ? {1, ..., q}. Edges are generated independently according
to a q ? q matrix {pab }. Without loss of generality here we discuss the commonly studied case
where the q groups have equal size and where $\{p_{ab}\}$ has only two distinct entries, $p_{ab} = c_{in}/n$ if $a = b$ and $c_{out}/n$ if $a \ne b$. Given the average degree of the graph, there is a so-called detectability transition $\epsilon^\star = c_{out}/c_{in} = (\sqrt{c} - 1)/(\sqrt{c} - 1 + q)$ [7], beyond which point it is not possible to obtain any information about the planted partition. It is also known that spectral algorithms based on
the non-backtracking matrix succeed all the way down to the transition [15]. This transition was
recently established rigorously in the case of q = 2 [20, 21]. Comparisons of spectral methods using
different matrices are shown in Fig. 3 left. From the figure we see that the X-Laplacian works as
well as the non-backtracking matrix, down to the detectability transition. While the direct use of the
adjacency matrix, i.e. $L_X$ before learning, does not work well when ε exceeds about 0.1.
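For readers who want to reproduce this kind of experiment, a minimal SBM generator and overlap score can be sketched as follows (our own illustration; the baseline below is the plain mean-removed data matrix, i.e. $L_X$ before learning, and at c = 3 it may already suffer from the localized eigenvectors described above, which is exactly the failure mode the learned X repairs):

```python
import numpy as np
from itertools import permutations

def sbm(n, q, c, eps, rng):
    """Sparse SBM with p_in = c_in/n, p_out = c_out/n and eps = c_out/c_in."""
    c_in = q * c / (1 + (q - 1) * eps)     # chosen so the average degree is c
    c_out = eps * c_in
    labels = rng.integers(q, size=n)
    P = np.where(labels[:, None] == labels[None, :], c_in / n, c_out / n)
    A = (rng.random((n, n)) < P).astype(float)
    A = np.triu(A, 1); A = A + A.T         # symmetric, no self-loops
    return A, labels

def overlap(pred, truth, q):
    """Fraction of correctly reconstructed labels, maximized over permutations."""
    return max(np.mean(np.array(p)[pred] == truth) for p in permutations(range(q)))

rng = np.random.default_rng(0)
A, truth = sbm(n=2000, q=2, c=3.0, eps=0.1, rng=rng)
M = A - A.mean()                           # mean-removed data matrix baseline
vals, vecs = np.linalg.eigh(M)
pred = (vecs[:, -1] > 0).astype(int)       # sign of the leading eigenvector
print("overlap:", overlap(pred, truth, q=2))
```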
In the right panel of Fig. 3, each network is generated by the stochastic block model with the same
parameter as in the left panel, but with 10 extra cliques, each of which contains 10 randomly selected
nodes. Theses cliques do not carry information about the planted partition, hence act as noise to the
system. In addition to the non-backtracking matrix, X-Laplacian, and the adjacency matrix, we put
into comparison the results obtained using other classic and newly proposed matrices, including
Bethe Hessian [24], Normalized Laplacian (N. Laplacian) $L_{sym} = I - \hat{A}$, and regularized and normalized Laplacian (R.N. Laplacian) $L_A = \hat{A} - \alpha \mathbf{1}\mathbf{1}^T$, with an optimized regularization $\alpha$ (we have scanned the whole range of $\alpha$, and chosen an optimal one that gives the largest overlap, i.e.
fraction of correctly reconstructed labels, in most of cases). From the figure we see that with the
noise added, only X-Laplacian works down to the original transition (of SBM without cliques). All
other matrices fail in detecting the community structure with ε > 0.15.
We have tested other kinds of noisy models, including the noisy stochastic block model, as proposed
in [10]. Our results show that the X-Laplacian works well (see SI text) while all other spectral
methods do not work at all on this dataset [10]. Moreover, in addition to the classic stochastic block
model, we have extensively evaluated our method on networks generated by the degree-corrected
stochastic block model [12], and the stochastic block model with extensive triangles. We basically
obtained qualitatively results as in Fig. 3 that the X-Laplacian works as well as the state-of-the-art
spectral methods for the dataset. The figures and detailed results can be found at the SI text.
We have also tested real-world networks with an expert division, and found that although the expert
division is usually easy to detect by directly using the adjacency matrix, the X-Laplacian significantly improves the accuracy of detection. For example on the political blogs network [1], spectral
clustering using the adjacency matrix gives 83 mis-classified labels among totally 1222 labels, while
the X-Laplacian gives only 50 mis-classified labels.
[Figure 3: overlap vs. ε for the two experiments; left legend: Adjacency, Non-backtracking, X-Laplacian; right legend: Adjacency, R. N. Adjacency, N. Laplacian, Non-backtracking, Bethe Hessian, X-Laplacian; y-axes: Overlap; x-axes: ε; dashed lines mark the detectability transition.]
Figure 3: Accuracy of community detection, represented by overlap (fraction of correctly reconstructed labels) between inferred partition and the planted partition, for several methods on networks
generated by the stochastic block model with average degree c = 3 (left) and with extra 10 size-10
cliques (right). All networks have n = 10000 nodes and q = 2 groups, ε = c_out/c_in. The black dashed
lines denote the theoretical detectability transition. Each data point is averaged over 20 realizations.
4.2 Clustering from sparse pairwise measurements
Consider the problem of grouping n items into clusters based on the similarity matrix S ? Rn?n ,
where Sij is the pairwise similarity between items i and j. Here we consider not using all pairwise
similarities, but only O(n) random samples of them. In other words, the similarity graph which
encodes the information of the global clustering structure is sparse, rather than the complete graph.
There are many motivations for choosing such sparse observations, for example in some cases all
measurements are simply not available or even can not be stored.
In this section we use the generative model recently proposed in [26], since there is a theoretical
limit that can be used to evaluate algorithms. Without loss of generality, we consider the problem
with only q = 2 clusters. The model in [26] first assigns items hidden clusters $\{t_i\} \in \{1, 2\}^n$, then generates the similarity between a randomly sampled pair of items according to a probability distribution, $p_{in}$ or $p_{out}$, associated with the memberships of the two items. There is a theoretical limit $c^\star$ satisfying
$$c^\star = q \left( \int ds\, \frac{(p_{in}(s) - p_{out}(s))^2}{p_{in}(s) + (q - 1)\, p_{out}(s)} \right)^{-1},$$
such that with $c < c^\star$ no algorithm could obtain any partial information of the planted clusters; while with $c > c^\star$ some algorithms, e.g. spectral clustering using the Bethe Hessian [26], achieve partial recovery of the planted clusters.
Similar to the community detection in sparse graphs, spectral algorithms directly using the eigenvectors of a similarity matrix S does not work well, due to the localization of eigenvectors induced
by the sparsity. To evaluate whether our method, the X-Laplacian, solves the localization problem,
and how it works compared with the Bethe Hessian, in Fig. 4 we plot the performance (in overlap,
the fraction of correctly reconstructed group labels) of three algorithms on the same set of similarity
matrices. For all the datasets there are two groups with distributions pin and pout being Gaussian
with unit variance and mean 0.75 and ?0.75 respectively. In the left panel of Fig. 4 the topology
of pairwise entries is random graph, Bethe Hessian works down to the theoretical limit, while directly using of the measurement matrix gives a poor performance. We can also see that X-Laplacian
has fixed the localization problem of directly using of the measurement matrix, and works almost
as good as the Bethe-Hessian. We note that the Bethe Hessian needs to know the parameters (i.e.
parameters of distributions pin and pout ), while the X-Laplacian does not use them at all.
In the right panel of Fig. 4, on top of the ER random graph topology, we add some noisy local
structures by randomly selecting 20 nodes and connecting neighbors of each selected node to each
other. The weights for these local pairwise entries were set to 1, so that the noisy structures do not contain
information about the underlying clustering. We can see that Bethe Hessian is influenced by noisy
local structures and fails to work, while X-Laplacian solves the localization problems induced by
sparsity, and is robust to the noise. We have also tested other kinds of noise by adding cliques, or
hubs, and obtained similar results (see SI text).
[Figure 4: overlap vs. c for the two experiments; legends: Pairwise measurement matrix, Bethe Hessian, X-Laplacian; y-axes: Overlap; x-axes: c; dashed lines mark the detectability transition.]
Figure 4: Spectral clustering using sparse pairwise measurements. The X-axis denotes the average
number of pairwise measurements per data point, and the Y-axis is the fraction of correctly reconstructed labels, maximized over permutations. The model used to generate pairwise measurements
is proposed in [26], see text for detailed descriptions. In the left panel, the topologies of the pairwise measurements are random graphs. In the right panel in addition to the random graph topology
there are 20 randomly selected nodes with all their neighbors connected. Each point in the figure is
averaged over 20 realizations of size $10^4$.
4.3 Rank estimation and Matrix Completion
The last problem we consider in this paper for evaluating the X-Laplacian is completion of a low rank
matrix from few entries. This problem has many applications including the famous collaborative
filtering. A problem that is closely related to it is the rank estimation from revealed entries. Indeed
estimating rank of the matrix is usually the first step before actually doing the matrix completion.
The problem is defined as follows: let $A^{\mathrm{true}} = U V^T$, where $U \in \mathbb{R}^{n \times r}$ and $V \in \mathbb{R}^{m \times r}$ are chosen uniformly at random and $r \ll \sqrt{mn}$ is the ground-true rank. Only few, say $c\sqrt{mn}$, entries of matrix $A^{\mathrm{true}}$ are revealed. That is, we are given a matrix $A \in \mathbb{R}^{n \times m}$ which contains only a subset of $A^{\mathrm{true}}$, with the other elements being zero. Many algorithms have been proposed for matrix completion,
including nuclear norm minimization [5] and methods based on the singular value decomposition [4]
etc. Trimming, which sets to zero all rows and columns with a large number of revealed entries, is usually
introduced to control the localizations of singular vectors and to estimate the rank using the gap of
singular values [14]. Analogous to the community detection problem, trimming is not supposed to
work optimally when matrix A is sparse. Indeed in [25] authors reported that their approach based
on the Bethe Hessian outperforms trimming+SVD when the topology of revealed entries is a sparse
random graph. Moreover, authors in [25] show that the number of negative eigenvalues of the Bethe
Hessian gives a more accurate estimate of the rank of A than that based on trimming+SVD.
However, we see that if the topology is not locally-tree-like but with some noise, for example with
some additional cliques, both trimming of the data matrix and Bethe Hessian perform much worse,
reporting a wrong rank, and giving a large reconstruction error, as illustrated in Fig. 5. In the left
panel of the figure we plot the eigenvalues of the Bethe Hessian, and singular values of trimmed
matrix A with true rank rtrue = 2. We can see that both of them are continuously distributed: there
is no clear gap in singular values of trimmed A, and Bethe Hessian has lots of negative eigenvalues.
In this case, since matrix A could be a non-square matrix, we need to define the X-Laplacian as
$$L_X = \begin{pmatrix} 0 & A \\ A^T & 0 \end{pmatrix} - X.$$
The eigenvalues of $L_X$ are also plotted in Fig. 5, where one can see clearly
that there is a gap between the second largest eigenvalue and the third one. Thus the correct rank
can be estimated using the value minimizing consecutive eigenvalues, as suggested in [14].
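A sketch of the bipartite embedding and the eigenvalue-gap rank heuristic (our own illustration with X set to zero, i.e. before any regularization learning; the sizes, the reveal pattern, and the gap rule are arbitrary demo choices):

```python
import numpy as np

rng = np.random.default_rng(2)
n, m, r, c = 400, 300, 2, 8
U = rng.normal(size=(n, r)); V = rng.normal(size=(m, r))
A_true = U @ V.T

# reveal roughly c*sqrt(n*m) entries uniformly at random
mask = rng.random((n, m)) < c * np.sqrt(n * m) / (n * m)
A = np.where(mask, A_true, 0.0)

# bipartite (symmetric) embedding of the possibly non-square matrix A
L = np.block([[np.zeros((n, n)), A], [A.T, np.zeros((m, m))]])
vals = np.linalg.eigvalsh(L)[::-1]           # descending eigenvalues

# heuristic: pick the rank at the largest gap between consecutive eigenvalues
gaps = vals[:9] - vals[1:10]
print("leading eigenvalues:", np.round(vals[:6], 2))
print("estimated rank:", int(np.argmax(gaps)) + 1)
```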
After estimating the rank of the matrix, matrix completion is done by using a local optimization
algorithm [27], starting from initial matrices obtained using the first r singular vectors of trimming+SVD, the first r eigenvectors of the Bethe Hessian, and the X-Laplacian with estimated rank r, respectively. The results are shown in Fig. 5 (right), where we plot the probability that the obtained root mean square error (RMSE) is smaller than $10^{-7}$ as a function of the average number of revealed entries per row c, for the ER random-graph topology plus noise represented by several cliques. We can see that the X-Laplacian outperforms the Bethe Hessian and Trimming+SVD for $c \ge 13$. Moreover, when $c \ge 18$, only the X-Laplacian gives an accurate completion for all instances.
[Figure 5: left panel: eigenvalues / singular values (legend: Trimming, Bethe Hessian, X-Laplacian); right panel: P(RMSE < 10^{-7}) vs. c (legend: Trimming SVD, Bethe Hessian, X-Laplacian).]
Figure 5: (Left:) Singular values of the sparse data matrix with trimming, and eigenvalues of the Bethe Hessian and X-Laplacian. The data matrix is the outer product of two vectors of size 1000. Their entries are Gaussian random variables with mean zero and unit variance, so the rank of the original matrix is 2. The topology of the revealed observations are random graphs with average degree c = 8 plus 10 random cliques of size 20. (Right:) Fraction of samples with RMSE smaller than $10^{-7}$, among 100 samples of rank-3 data matrices $U V^T$ of size 1000 × 1000, with the entries of U and V drawn from a Gaussian distribution of mean 0 and unit variance. The topology of revealed entries is the random graph with varying average degree c plus 10 size-20 cliques.
5 Conclusion and discussion
We have presented the X-Laplacian, a general approach for detecting latent global structure in a
given data matrix. It is completely a data-driven approach that learns different forms of regularization for different data, to solve the problem of localization of eigenvectors or singular vectors. The
mechanics for de-localizing of eigenvectors during learning of regularizations has been illustrated
using the matrix perturbation analysis. We have validated our method using extensive numerical experiments, and shown that it outperforms state-of-the-art algorithms on various inference problems
in the sparse regime and with noise.
In this paper we discuss the X-Laplacian using directly the (mean-removed) data matrix A, but
we note that the data matrix is not the only choice for the X-Laplacian. Actually we have tested
approaches using various variants of A, such as the normalized data matrix $\hat{A}$, and found they work as
well. We also tried learning regularizations for the Bethe Hessian, and found it succeeds in repairing
the Bethe Hessian when the Bethe Hessian has a localization problem. This indicates that our scheme of
regularization-learning is a general spectral approach for hard inference problems.
A (Matlab) demo of our method can be found at http://panzhang.net.
References
[1] L. A. Adamic and N. Glance. The political blogosphere and the 2004 US election: divided they blog. In Proceedings of the 3rd International Workshop on Link Discovery, pages 36–43. ACM, 2005.
[2] A. A. Amini, A. Chen, P. J. Bickel, and E. Levina. Pseudo-likelihood methods for community detection in large sparse networks. The Annals of Statistics, 41(4):2097–2122, 2013.
[3] R. Bell and P. Dean. Atomic vibrations in vitreous silica. Discussions of the Faraday Society, 50:55–61, 1970.
[4] J.-F. Cai, E. J. Candès, and Z. Shen. A singular value thresholding algorithm for matrix completion. SIAM Journal on Optimization, 20(4):1956–1982, 2010.
[5] E. J. Candès and B. Recht. Exact matrix completion via convex optimization. Foundations of Computational Mathematics, 9(6):717–772, 2009.
[6] A. Coja-Oghlan. Graph partitioning via adaptive spectral techniques. Combinatorics, Probability and Computing, 19:227–284, 2010.
[7] A. Decelle, F. Krzakala, C. Moore, and L. Zdeborová. Asymptotic analysis of the stochastic block model for modular networks and its algorithmic applications. Phys. Rev. E, 84:066106, Dec 2011.
[8] K.-i. Hashimoto. Zeta functions of finite graphs and representations of p-adic groups. Advanced Studies in Pure Mathematics, 15:211–280, 1989.
[9] P. W. Holland, K. B. Laskey, and S. Leinhardt. Stochastic blockmodels: First steps. Social Networks, 5(2):109–137, 1983.
[10] A. Javanmard, A. Montanari, and F. Ricci-Tersenghi. Phase transitions in semidefinite relaxations. Proceedings of the National Academy of Sciences, 113(16):E2218, 2016.
[11] A. Joseph and B. Yu. Impact of regularization on spectral clustering. arXiv preprint arXiv:1312.1733, 2013.
[12] B. Karrer and M. E. J. Newman. Stochastic blockmodels and community structure in networks. Phys. Rev. E, 83:016107, Jan 2011.
[13] R. H. Keshavan, A. Montanari, and S. Oh. Low-rank matrix completion with noisy observations: a quantitative comparison. In Communication, Control, and Computing, 2009. Allerton 2009. 47th Annual Allerton Conference on, pages 1216–1222. IEEE, 2009.
[14] R. H. Keshavan, S. Oh, and A. Montanari. Matrix completion from a few entries. In Information Theory, 2009. ISIT 2009. IEEE International Symposium on, pages 324–328. IEEE, 2009.
[15] F. Krzakala, C. Moore, E. Mossel, J. Neeman, A. Sly, L. Zdeborová, and P. Zhang. Spectral redemption in clustering sparse networks. Proc. Natl. Acad. Sci. USA, 110(52):20935–20940, 2013.
[16] C. M. Le, E. Levina, and R. Vershynin. Sparse random graphs: regularization and concentration of the Laplacian. arXiv preprint arXiv:1502.03049, 2015.
[17] C. M. Le and R. Vershynin. Concentration and regularization of random graphs. arXiv preprint arXiv:1506.00669, 2015.
[18] J. Lei and A. Rinaldo. Consistency of spectral clustering in stochastic block models. The Annals of Statistics, 43(1):215–237, 2014.
[19] U. von Luxburg. A tutorial on spectral clustering. Statistics and Computing, 17(4):395–416, 2007.
[20] L. Massoulié. Community detection thresholds and the weak Ramanujan property. In Proceedings of the 46th Annual ACM Symposium on Theory of Computing, pages 694–703. ACM, 2014.
[21] E. Mossel, J. Neeman, and A. Sly. Stochastic block models and reconstruction. arXiv preprint arXiv:1202.1499, 2012.
[22] A. Y. Ng, M. I. Jordan, and Y. Weiss. On spectral clustering: Analysis and an algorithm. Advances in Neural Information Processing Systems, 2:849–856, 2002.
[23] T. Qin and K. Rohe. Regularized spectral clustering under the degree-corrected stochastic blockmodel. In Advances in Neural Information Processing Systems, pages 3120–3128, 2013.
[24] A. Saade, F. Krzakala, and L. Zdeborová. Spectral clustering of graphs with the Bethe Hessian. In Advances in Neural Information Processing Systems, pages 406–414, 2014.
[25] A. Saade, F. Krzakala, and L. Zdeborová. Matrix completion from fewer entries: Spectral detectability and rank estimation. In C. Cortes, N. Lawrence, D. Lee, M. Sugiyama, and R. Garnett, editors, Advances in Neural Information Processing Systems 28, pages 1261–1269. Curran Associates, Inc., 2015.
[26] A. Saade, M. Lelarge, F. Krzakala, and L. Zdeborová. Clustering from sparse pairwise measurements. To appear in IEEE International Symposium on Information Theory (ISIT). IEEE, arXiv:1601.06683, 2016.
[27] S. G. Johnson. The NLopt nonlinear-optimization package, 2014.
6,071 | 6,492 | Quantized Random Projections and Non-Linear Estimation of Cosine Similarity
Ping Li, Rutgers University, pingli@stat.rutgers.edu
Michael Mitzenmacher, Harvard University, michaelm@eecs.harvard.edu
Martin Slawski, Rutgers University, martin.slawski@rutgers.edu
Abstract
Random projections constitute a simple, yet effective technique for dimensionality
reduction with applications in learning and search problems. In the present paper,
we consider the problem of estimating cosine similarities when the projected
data undergo scalar quantization to b bits. We here argue that the maximum
likelihood estimator (MLE) is a principled approach to deal with the non-linearity
resulting from quantization, and subsequently study its computational and statistical
properties. A specific focus is on the trade-off between bit depth and the
number of projections given a fixed budget of bits for storage or transmission.
Along the way, we also touch upon the existence of a qualitative counterpart to the
Johnson-Lindenstrauss lemma in the presence of quantization.
1 Introduction
The method of random projections (RPs) is an important approach to linear dimensionality reduction [23]. RPs have established themselves as an alternative to principal components analysis which
is computationally more demanding. Instead of determining an optimal low-dimensional subspace
via a singular value decomposition, the data are projected on a subspace spanned by a set of directions
picked at random (e.g. by sampling from the Gaussian distribution). Despite its simplicity, this
approach comes with a theoretical guarantee: as asserted by the celebrated Johnson-Lindenstrauss
(J-L) lemma [6, 12], k = O(log n/ε²) random directions are enough to preserve the squared distances
between all pairs from a data set of size n up to a relative error of ε, irrespective of the dimension d the
data set resides in originally. Inner products are preserved similarly. As a consequence, procedures
only requiring distances or inner products can be approximated in the lower-dimensional space,
thereby achieving substantial reductions in terms of computation and storage, or mitigating the curse
of dimensionality. The idea of RPs has thus been employed in linear learning [7, 19], fast matrix
factorization [24], similarity search [1, 9], clustering [2, 5], statistical testing [18, 22], etc.
The idea of data compression by RPs has been extended to the case where the projected data are
additionally quantized to b bits so as to achieve further reductions in data storage and transmission.
The extreme case of b = 1 is well-studied in the context of locality sensitive hashing [4]. More
recently, b-bit quantized random projections for b ≥ 1 have been considered from different perspectives. The paper [17] studies Hamming distance-based estimation of cosine similarity and linear
classification when using a coding scheme that maps a real value to a binary vector of length 2^b. It
is demonstrated that for similarity estimation, taking b > 1 may yield improvements if the target
similarity is high. The paper [10] is dedicated to J-L-type results for quantized RPs, considerably
improving over an earlier result of the same flavor in [15]. The work [15] also discusses the trade-off
between the number of projections k and number of bits b per projection under a given budget of bits
as it also appears in the literature on quantized compressed sensing [11, 14].
In the present paper, all of these aspects and some more are studied for an approach that can be
substantially more accurate for small b (specifically, we focus on 1 ≤ b ≤ 6) than those in [10, 17, 15].
In [10, 15] the non-linearity of quantization is ignored by treating the quantized data as if they had
been observed directly. Such a "linear" approach benefits from its simplicity, but it is geared towards
fine quantization, whereas for small b the bias resulting from quantization dominates. By contrast,
the approach proposed herein makes full use of the knowledge about the quantizer. As in [17] we
suppose that the original data set is contained in the unit sphere of ℝᵈ, or at least that the Euclidean
norms of the data points are given. In this case, approximating distances boils down to estimating
inner products (or cosine similarity) which can be done by maximum likelihood (ML) estimation
based on the quantized data. Several questions of interest can be addressed by considering the Fisher
information of the maximum likelihood estimator (MLE). With regard to the aforementioned trade-off
between k and b, it turns out that the choice b = 1 is optimal (in the sense of yielding maximum
Fisher information) as long as the underlying similarity is smaller than 0.2; as the latter increases, the
more effective it becomes to increase b. By considering the rate of growth of the Fisher information
near the maximum similarity of one, we discover a gap between the finite bit and infinite bit case with
rates of Θ((1 − ρ*)^{−3/2}) and Θ((1 − ρ*)^{−2}), respectively, where ρ* denotes the target similarity.
As an implication, an exact equivalent of the J-L lemma does not exist in the finite bit case.
The MLE under study does not have a closed form solution. We show that it is possible to approximate
the MLE by a non-iterative scheme only requiring pre-computed look-up tables. Derivation of this
scheme lets us draw connections to alternatives like the Hamming distance-based estimator in [17].
We present experimental results concerning applications of the proposed approach in nearest neighbor
search and linear classification. In nearest neighbor search, we focus on the high similarity regime and
confirm theoretical insights into the trade-off between k and b. For linear classification, we observe
empirically that intermediate values of b can yield better trade-offs than single-bit quantization.
Notation. We let [d] = {1, . . . , d}. I(P) denotes the indicator function of expression P. For a function f(·), we use ḟ(·) and f̈(·) for its first resp. second derivative. P_ρ and E_ρ denote probability/expectation w.r.t. a zero mean, unit variance bivariate normal distribution with correlation ρ.
Supplement: Proofs and additional experimental results can be found in the supplement.
2 Quantized random projections, properties of the MLE, and implications
We start by formally introducing the setup, the problem and the approach that is taken before
discussing properties of the MLE in this specific case, along with important implications.
Setup. Let X = {x₁, . . . , xₙ} ⊂ S^{d−1}, where S^{d−1} := {x ∈ ℝᵈ : ‖x‖₂ = 1} denotes the unit sphere in ℝᵈ, be a set of data points. We think of d as being large. As discussed below, the requirement of having all data points normalized to unit norm is not necessary, but it simplifies our exposition considerably. Let x, x′ be a generic pair of elements from X and let ρ* = ⟨x, x′⟩ denote their inner product. Alternatively, we may refer to ρ* as (cosine) similarity or correlation. Again for simplicity, we assume that 0 ≤ ρ* < 1; the case of negative ρ* is a trivial extension because of symmetry.
We aim at reducing the dimensionality of the given data set by means of a random projection, which
is realized by sampling a random matrix A of dimension k by d whose entries are i.i.d. N(0, 1) (i.e., zero-mean Gaussian with unit variance). Applying A to X yields Z = {z_i}_{i=1}^n ⊂ ℝᵏ with z_i = Ax_i, i ∈ [n]. Subsequently, the projected data points {z_i}_{i=1}^n are subject to scalar quantization. A b-bit scalar quantizer is parameterized by 1) thresholds t = (t₁, . . . , t_{K−1}) with 0 = t₀ < t₁ < . . . < t_{K−1} < t_K = +∞ inducing a partitioning of the positive real line into K = 2^{b−1} intervals {[t_{r−1}, t_r), r ∈ [K]} and 2) a codebook M = {μ₁, . . . , μ_K} with code μ_r representing interval [t_{r−1}, t_r), r ∈ [K]. Given t and M, the scalar quantizer (or quantization map) is defined by

    Q : ℝ → M± := −M ∪ M,    z ↦ Q(z) = sign(z) ∑_{r=1}^{K} μ_r I(|z| ∈ [t_{r−1}, t_r))    (1)
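To make the construction concrete, here is a minimal NumPy sketch of the quantization map Q from (1) applied to a projected data point; the particular thresholds and codes passed in the example are hypothetical and only for illustration.

```python
import numpy as np

def quantize(z, t, mu):
    """b-bit scalar quantizer Q from Eq. (1).

    t:  thresholds (t_1, ..., t_{K-1}); t_0 = 0 and t_K = +inf are implicit.
    mu: codebook (mu_1, ..., mu_K), one code per interval [t_{r-1}, t_r).
    Returns sign(z) * mu_r, where |z| lies in the r-th interval.
    """
    # searchsorted returns the (0-based) interval index of |z| among the thresholds
    idx = np.searchsorted(np.asarray(t), np.abs(z), side="right")
    return np.sign(z) * np.asarray(mu)[idx]

rng = np.random.default_rng(0)
k, d = 100, 10_000
A = rng.standard_normal((k, d))                # random projection matrix
x = rng.standard_normal(d)
x /= np.linalg.norm(x)                         # data point on the unit sphere
z = A @ x                                      # projected point
q = quantize(z, t=[0.75], mu=[0.4, 1.2])       # b = 2 bits (K = 2); values hypothetical
```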
The projected, b-bit quantized data result as Q = {q_i}_{i=1}^n ⊂ (M±)ᵏ, q_i = (Q(z_{ij}))_{j=1}^{k}, i ∈ [n].
Problem statement. Let z, z′ and q, q′ denote the pairs corresponding to x, x′ in Z respectively Q. The goal is to estimate ρ* = ⟨x, x′⟩ from q, q′, which automatically yields an estimate of ‖x − x′‖₂² = 2(1 − ρ*). If z, z′ were given, it would be standard to use (1/k)⟨z, z′⟩ as an unbiased estimator of ρ*. This "linear" approach is commonly adopted when the data undergo uniform quantization with saturation level T (i.e., t_r = T · r/(K − 1), μ_r = (t_r + t_{r−1})/2, r ∈ [K − 1], μ_K = T), based on the rationale that as b → ∞, (1/k)⟨q, q′⟩ → (1/k)⟨z, z′⟩, which in turn is sharply concentrated around its expectation ρ*.
There are two major concerns about this approach. First, for finite b the estimator (1/k)⟨q, q′⟩ has a bias resulting from the non-linearity of Q that does not vanish as k → ∞. For small b, the effect of this bias is particularly pronounced.
[Figure 1: (L, M): Partitioning into cells for b = 2 and the associated cell probabilities, e.g. p₁ = P_ρ(Z ∈ (0, t₁], Z′ ∈ (0, t₁]), p₂ = P_ρ(Z ∈ (0, t₁], Z′ ∈ (t₁, ∞)), p₃ = P_ρ(Z ∈ (t₁, ∞), Z′ ∈ (t₁, ∞)), with p₄, p₅, p₆ defined analogously under P_{−ρ}. (R): Empirical MSE k(ρ̂_MLE − ρ*)² for b = 3 (averaged over 10⁴ i.i.d. data sets with k = 100) compared to the inverse information I⁻¹(ρ). The disagreement for ρ ≲ 0.2 results from positive truncation of the MLE at zero.]
Lloyd-Max quantization (see Proposition 1 below) in place of uniform quantization provides some remedy, but the issue of non-vanishing bias remains. Second, even for infinite b, the approach is statistically not efficient. In order to see this, note that

    {(z_j, z′_j)}_{j=1}^{k} i.i.d. ∼ (Z, Z′), where (Z, Z′) ∼ N₂(0, [[1, ρ*], [ρ*, 1]]).    (2)

It is shown in [16] that the MLE of ρ* under the above bivariate normal model has a variance of (1 − ρ*²)²/{k(1 + ρ*²)}, while Var(⟨z, z′⟩/k) = (1 + ρ*²)/k, which is a substantial difference for large ρ*. The higher variance results from not using the information that the components of z and z′ have unit variance [16]. In conclusion, the linear approach as outlined above suffers from noticeable bias and/or high variance if the similarity ρ* is high, and it thus makes sense to study alternatives.
Maximum likelihood estimation of ρ*. We here propose the MLE in place of the linear approach. The advantage of the MLE is that it can have substantially better statistical performance, as the quantization map is explicitly taken into account. The MLE is based on bivariate normality according to (2). The effect of quantization is identical to that of what is known as interval censoring in statistics, i.e., in place of observing a specific value, one only observes that the datum is contained in an interval. The concept is easiest to understand in the case of one-bit quantization. For any j ∈ [k], each of the four possible outcomes of (q_j, q′_j) corresponds to one of the four orthants of ℝ². By symmetry, the probabilities of (q_j, q′_j) falling into the positive or into the negative orthant are identical; both correspond to a "collision", i.e., to the event {q_j = q′_j}. Likewise, the probabilities of (q_j, q′_j) falling into one of the remaining two orthants are identical, corresponding to a disagreement {q_j ≠ q′_j}.
Accordingly, the likelihood function in ρ is given by

    ∏_{j=1}^{k} {γ(ρ)^{I(q_j = q′_j)} (1 − γ(ρ))^{I(q_j ≠ q′_j)}},    γ(ρ) := P_ρ(sign(Z) = sign(Z′)),

where γ(ρ) denotes the probability of a collision after quantization for (Z, Z′) as in (2) with ρ* replaced by ρ. It is straightforward to show that the MLE is given by ρ̂_MLE = cos(π(1 − γ̂)), where π is the circle constant and γ̂ = k⁻¹ ∑_{j=1}^{k} I(q_j = q′_j) is the empirical counterpart to γ(ρ). We note that the expression for ρ̂_MLE follows the same rationale as used for the simhash in [4].
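The closed-form one-bit estimator is a two-liner; the following sketch assumes q and q_prime are the ±1 sign patterns of the two projected points, and checks the estimator on simulated bivariate normal data.

```python
import numpy as np

def rho_mle_one_bit(q, q_prime):
    """One-bit MLE: rho_hat = cos(pi * (1 - gamma_hat)), with gamma_hat the
    empirical collision frequency of the two sign vectors."""
    gamma_hat = np.mean(q == q_prime)
    return np.cos(np.pi * (1.0 - gamma_hat))

rng = np.random.default_rng(1)
k, rho = 10_000, 0.7
z = rng.standard_normal(k)
z_prime = rho * z + np.sqrt(1 - rho**2) * rng.standard_normal(k)
print(rho_mle_one_bit(np.sign(z), np.sign(z_prime)))   # close to 0.7
```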
With these preparations, it is not hard to see how the MLE generalizes to cases with more than one bit. For b = 2, there is a single non-trivial threshold t₁ that yields a partitioning of the real axis into four bins, and accordingly a component (q_j, q′_j) of a quantized pair can fall into 16 possible cells (rectangles), cf. Figure 1. By orthant symmetry and symmetries within each orthant, one ends up with six distinct probabilities p₁, . . . , p₆ for (q_j, q′_j) falling into one of those cells depending on ρ. Weighting those probabilities according to the number of their occurrences in the left part of Figure 1, we end up with probabilities π₁ = π₁(ρ), . . . , π₆ = π₆(ρ) that sum up to one. The corresponding relative cell frequencies π̂₁, . . . , π̂₆ resulting from (q_j, q′_j)_{j=1}^{k} form a sufficient statistic for ρ. For general b, we have 2^{2b} cells and L = K(K + 1) (recall that K = 2^{b−1}) distinct probabilities, so that L = 20, 72, 272, 1056 for b = 3, . . . , 6.
[Figure 2: b · I_b⁻¹(ρ)/I₁⁻¹(ρ) vs. ρ for different choices of t: Lloyd-Max and uniform quantization with saturation levels T_{0.9}, T_{0.95}, T_{0.99}, cf. §4.1 for a definition. The latter are better suited for high similarity. The differences become smaller as b increases. Note that for b = 6, ρ > 0.7 is required for either quantization scheme to achieve a better trade-off than the one-bit MLE.]
This yields the following compact expressions for the negative log-likelihood l(ρ) and the Fisher information I(ρ) = E_ρ[l̈(ρ)] (up to a factor of k):

    l(ρ) = −∑_{ℓ=1}^{L} π̂_ℓ log(π_ℓ(ρ)),    I(ρ) = ∑_{ℓ=1}^{L} (π̇_ℓ(ρ))² / π_ℓ(ρ).    (3)
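Given a routine for the cell probabilities π_ℓ(ρ) (bivariate normal rectangle probabilities, e.g. via procedures as in [8]), both quantities in (3) and the MLE itself are one-dimensional computations. The sketch below assumes a user-supplied cell_probs(rho) returning the vector (π₁(ρ), . . . , π_L(ρ)); the central-difference derivative stands in for the closed-form derivatives used in the paper.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def neg_log_lik(rho, pi_hat, cell_probs):
    """l(rho) = -sum_l pi_hat_l * log(pi_l(rho)), cf. Eq. (3)."""
    return -np.sum(pi_hat * np.log(cell_probs(rho)))

def fisher_info(rho, cell_probs, h=1e-5):
    """I(rho) = sum_l pi_dot_l(rho)^2 / pi_l(rho), with a numerical derivative."""
    pi_dot = (cell_probs(rho + h) - cell_probs(rho - h)) / (2 * h)
    return np.sum(pi_dot**2 / cell_probs(rho))

def rho_mle(pi_hat, cell_probs):
    """MLE of rho: a smooth, one-dimensional problem over [0, 1)."""
    res = minimize_scalar(neg_log_lik, bounds=(0.0, 0.999), method="bounded",
                          args=(pi_hat, cell_probs))
    return res.x
```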
The information I(ρ) is of particular interest. By classical statistical theory [21], {E[ρ̂_MLE] − ρ*}² = O(1/k²), Var(ρ̂_MLE) = I⁻¹(ρ)/k, and E[(ρ̂_MLE − ρ*)²] = I⁻¹(ρ)/k + O(1/k²) as k → ∞. While this is an asymptotic result, it agrees to a good extent with what one observes for finite, but not too small, samples, cf. Figure 1. We therefore treat the inverse information as a proxy for the accuracy of ρ̂_MLE in subsequent analysis.
Remark. We here briefly address the case of known, but possibly non-unit norms, i.e., ‖x‖₂ = σ_x, ‖x′‖₂ = σ_{x′}. This can be handled by re-scaling the thresholds of the quantizer (1) by σ_x resp. σ_{x′}, estimating ρ* based on q, q′ as in the unit norm case, and subsequently re-scaling the estimate by σ_x σ_{x′} to obtain an estimate of ⟨x, x′⟩. The assumption that the norms are known is not hard to satisfy in practice, as they can be computed by one linear scan during data collection. With a limited bit budget, the norms additionally need to be quantized. It is unclear how to accurately estimate them from quantized data (for b = 1, it is definitely impossible).
Choice of the quantizer. Equipped with the Fisher information (3), one of the questions that can be addressed is quantizer design. Note that as opposed to the linear approach, the specific choice of the {μ_r}_{r=1}^{K} in (1) is not important, as ML estimation only depends on cell frequencies but not on the values associated with the intervals {(t_{r−1}, t_r]}_{r=1}^{K}. The thresholds t, however, turn out to have a considerable impact, at least for small b. An optimal set of thresholds can be determined by minimizing the inverse information I⁻¹(ρ; t) w.r.t. t for fixed ρ. As the underlying similarity is not known, this may not seem practical. On the other hand, prior knowledge about the range of ρ may be available, or the closed form one-bit estimator can be used as a pilot estimator. For ρ = 0, the optimal set of thresholds coincides with that of Lloyd-Max quantization [20].
Proposition 1. Let g ∼ N(0, 1) and consider Lloyd-Max quantization given by

    (t*, {μ*_r}_{r=1}^{K}) = argmin_{t, {μ_r}_{r=1}^{K}} E[{g − Q(g; t, {μ_r}_{r=1}^{K})}²].

We also have t* = argmin_t I⁻¹(0; t).
The Lloyd-Max problem can be solved numerically by means of an alternating scheme which can be shown to converge to a global optimum [13]. For ρ > 0, an optimal set of thresholds can be determined by general procedures for nonlinear optimization. Evaluation of I⁻¹(ρ; t) requires computation of the probabilities {π_ℓ(ρ; t)}_{ℓ=1}^{L} and their derivatives {π̇_ℓ(ρ; t)}_{ℓ=1}^{L}. The latter are available in closed form (cf. supplement), while for the former specialized numerical integration procedures [8] can be used. In order to avoid multi-dimensional optimization, it makes sense to confine oneself to thresholds of the form t_r = T · r/(K − 1), r ∈ [K − 1], so that only T needs to be optimized. Even though the Lloyd-Max scheme performs reasonably also for large values of ρ, the one-parameter scheme may still yield significant improvements in that case, cf. Figure 2. Once b ≥ 5, the differences between the two schemes become marginal.
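The alternating Lloyd-Max scheme for g ∼ N(0, 1) is short enough to sketch: the centroid step sets each code to the conditional mean of g on its interval, and the boundary step places each threshold midway between adjacent codes; the initialization below is an arbitrary choice.

```python
import numpy as np
from scipy.stats import norm

def lloyd_max_gaussian(K, iters=200):
    """Lloyd-Max quantizer for g ~ N(0, 1) on the positive half line with K
    intervals; returns thresholds t (length K-1) and codes mu (length K)."""
    # initialize thresholds at equally spaced quantiles of |g|
    t = norm.ppf(0.5 + 0.5 * np.arange(1, K) / K)
    for _ in range(iters):
        edges = np.concatenate(([0.0], t, [np.inf]))
        lo, hi = edges[:-1], edges[1:]
        # centroid step: mu_r = E[g | g in [lo_r, hi_r)]
        mu = (norm.pdf(lo) - norm.pdf(hi)) / (norm.cdf(hi) - norm.cdf(lo))
        # boundary step: t_r = (mu_r + mu_{r+1}) / 2
        t = 0.5 * (mu[:-1] + mu[1:])
    return t, mu

t, mu = lloyd_max_gaussian(K=2)   # b = 2 bits; t[0] comes out near 0.98
```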
Trade-off between k and b. Suppose we are given a fixed budget of bits B = k · b for transmission or storage, and we are free in choosing b.
[Figure 3: Trade-off between k and b. (L): b · I_b⁻¹(ρ) vs. ρ for 1 ≤ b ≤ 6 with t chosen by Lloyd-Max. (M): Zoom into the range 0.9 ≤ ρ ≤ 1. (R): choice of b minimizing b · I_b⁻¹(ρ) vs. ρ.]
The optimal choice of b can be determined by comparing the inverse Fisher information I_b⁻¹(ρ) for changing b with t chosen according to either of the two schemes above. Since the mean squared error of ρ̂_MLE decays with 1/k for any b, for b′ with b′ > b to be more efficient than b at the bit scale, it is required that I_{b′}(ρ)/I_b(ρ) > b′/b, as with the smaller choice b one would be allowed to increase k by a factor of b′/b. Again, this comparison depends on a specific ρ. From Figure 3, however, one can draw general conclusions: for ρ < 0.2, it does not pay off to increase b beyond one; as ρ increases, higher values of b achieve a better trade-off, with even b = 6 being the optimal choice for ρ > 0.98. The intuition is that two points of high similarity agree on their first significant bit for most coordinates, in which case increasing the number of bits becomes beneficial. This finding is particularly relevant to (near-)duplicate detection/nearest neighbor search where high similarities prevail, an application investigated in §4.
Rate of growth of the Fisher information near ρ = 1. Interestingly, we do not observe a "saturation" even for b = 6 in the sense that for ρ close enough to 1, one can still achieve an improvement at the bit scale compared to 1 ≤ b ≤ 5. This raises the question about the rate of growth of the Fisher information near one relative to the full precision case (b → ∞). As shown in [16], I_∞(ρ) = (1 + ρ²)/(1 − ρ²)² = Θ((1 − ρ)^{−2}) as ρ → 1. As stated below, in the finite bit case, the exponent is only 3/2 for all b. This is a noticeable gap.
Theorem 1. For 1 ≤ b < ∞, we have I(ρ) = Θ((1 − ρ)^{−3/2}) as ρ → 1.
The theorem has an interesting implication with regard to the existence of a Johnson-Lindenstrauss (J-L)-type result for quantized random projections. In a nutshell, the J-L lemma states that as long as k = Ω(log n/ε²), with high probability we have that

    (1 − ε)‖x_i − x_j‖₂² ≤ ‖z_i − z_j‖₂²/k ≤ (1 + ε)‖x_i − x_j‖₂² for all pairs (i, j),

i.e., the distances of the data in X are preserved in Z up to a relative error of ε. In our setting, one would hope for an equivalent of the form

    (1 − ε) · 2(1 − ρ_{ij}) ≤ 2(1 − ρ̂^{ij}_MLE) ≤ (1 + ε) · 2(1 − ρ_{ij}) ∀(i, j) as long as k = Ω(log n/ε²),    (4)

where ρ_{ij} = ⟨x_i, x_j⟩, i, j ∈ [n], and ρ̂^{ij}_MLE denotes the MLE for ρ_{ij} given quantized RPs. The standard proof of the J-L lemma [6] combines norm preservation for each individual pair of the form

    P((1 − ε)‖x_i − x_j‖₂² ≤ ‖z_i − z_j‖₂²/k ≤ (1 + ε)‖x_i − x_j‖₂²) ≥ 1 − 2 exp(−k · Ω(ε²))

with a union bound. Such a concentration result does not appear to be attainable for ρ̂_MLE − ρ*, not even asymptotically as k → ∞, in which case ρ̂_MLE − ρ* is asymptotically normal with mean zero and variance I⁻¹(ρ*)/k. This yields an asymptotic tail bound of the form

    P(|ρ̂_MLE − ρ*| > ε) ≤ 2 exp(−ε²k/{2I⁻¹(ρ*)}).    (5)

For a result of the form (4), which is about relative distance preservation, one would need to choose ε proportional to (1 − ρ*). In virtue of Theorem 1, I⁻¹(ρ*) = Θ((1 − ρ*)^{3/2}) as ρ* → 1, so that with ε chosen in that way the exponent in (5) would vanish as ρ* → 1. By contrast, the required rate of decay of I⁻¹(ρ*) is achieved in the full precision case. Given the asymptotic optimality of the MLE according to the Cramér-Rao lower bound, this suggests that a qualitative counterpart to the J-L lemma (4) is out of reach. Weaker versions in which the required lower bound on k would depend inversely on the minimum distance of points in X are still possible. Similarly, a weaker result of the form

    2(1 − ρ_{ij}) − ε ≤ 2(1 − ρ̂^{ij}_MLE) ≤ 2(1 − ρ_{ij}) + ε ∀(i, j) as long as k = Ω(log n/ε²),

is known to hold already in the one-bit case and follows immediately from the closed form expression of the MLE, Hoeffding's inequality, and the union bound; cf. e.g. [10].
3 A general class of estimators and approximate MLE computation
A natural concern about the MLE relative to the linear approach is that it requires optimization via an
iterative scheme. The optimization problem is smooth, one-dimensional and over the unit interval,
hence not challenging for modern solvers. However, in applications it is typically required to compute
the MLE many times, hence avoiding an iterative scheme for optimization is worthwhile. In this
section, we introduce an approximation to the MLE that only requires at most two table look-ups.
A general class of estimators. Let π(ρ) = (π₁(ρ), . . . , π_L(ρ))ᵀ, ∑_{ℓ=1}^{L} π_ℓ(ρ) = 1, be the normalized cell frequencies depending on ρ as defined in §2, let further w ∈ ℝᴸ be a fixed vector of weights, and consider the map ρ ↦ Φ(ρ; w) := ⟨π(ρ), w⟩. If ⟨π̇(ρ), w⟩ > 0 uniformly in ρ (such w always exist), Φ(ρ; w) is increasing and has an inverse Φ⁻¹(τ; w). We can then consider the estimator

    ρ̂_w = Φ⁻¹(⟨π̂, w⟩; w),    (6)

where we recall that π̂ = (π̂₁, . . . , π̂_L)ᵀ are the empirical cell frequencies given quantized data q, q′. It is easy to see that ρ̂_w is a consistent estimator of ρ*: we have π̂ → π(ρ*) in probability by the law of large numbers, and Φ⁻¹(⟨π̂, w⟩; w) → Φ⁻¹(⟨π(ρ*), w⟩; w) = Φ⁻¹(Φ(ρ*; w); w) = ρ* by two-fold application of the continuous mapping theorem. By choosing w such that w_ℓ = 1 for ℓ corresponding to cells contained in the positive/negative orthant and w_ℓ = −1 otherwise, ρ̂_w becomes the one-bit MLE. By choosing w_ℓ = 1 for diagonal cells (cf. Figure 1) corresponding to a collision event {q_j = q′_j} and w_ℓ = 0 otherwise, we obtain the Hamming distance-based estimator in [17]. Alternatively, we may choose w such that the asymptotic variance of ρ̂_w is minimized.
Theorem 2. For any w s.t. π̇(ρ*)ᵀw ≠ 0, we have Var(ρ̂_w) = V(w; ρ*)/k + O(1/k²) as k → ∞,

    V(w; ρ*) = (wᵀ Σ(ρ*) w)/{π̇(ρ*)ᵀ w}²,    Σ(ρ*) := Π(ρ*) − π(ρ*)π(ρ*)ᵀ,

and Π(ρ*) := diag((π_ℓ(ρ*))_{ℓ=1}^{L}). Moreover, let w* = Π⁻¹(ρ*)π̇(ρ*). Then:

    argmin_w V(w; ρ*) = {α(w* + c·1), α ≠ 0, c ∈ ℝ},    V(w*; ρ*) = I⁻¹(ρ*),

and E[(ρ̂_{w*} − ρ*)²] = E[(ρ̂_MLE − ρ*)²] + O(1/k²).
Theorem 2 yields an expression for the optimal weights w* = Π⁻¹(ρ*)π̇(ρ*). This optimal choice is unique up to translation by a multiple of the constant vector 1 and scaling. The estimator ρ̂_{w*} based on the choice w = w* achieves asymptotically the same statistical performance as the MLE.
Approximate computation. The estimator ρ̂_{w*} is not operational, as the optimal choice of the weights depends on the estimand itself. This issue can be dealt with by using a pilot estimator ρ̂₀ like the one-bit MLE, the Hamming distance-based estimator in [17], or ρ̂₀ = ρ̂_{w̄}, where w̄ = ∫₀¹ w(ρ) dρ averages the expression w(ρ) = Π⁻¹(ρ)π̇(ρ) for the optimal weights over ρ. Given the pilot estimator, we may then replace w* by w(ρ̂₀) and use ρ̂_{w(ρ̂₀)} as a proxy for ρ̂_{w*}, which achieves the same statistical performance asymptotically.
A second issue is that computation of ρ̂_w (6) entails inversion of the function Φ(ρ; w). The inverse may not be defined in general, but for the choices of w that we have in mind, this is not a concern (cf. supplement). Inversion of Φ(ρ; w) can be carried out with tolerance ε by tabulating the function values on a uniform grid of cardinality ⌈1/ε⌉ and performing a table lookup for each query. When computing ρ̂_{w(ρ̂₀)}, the weights depend on the data via the pilot estimator. We thus need to tabulate w(ρ) on a grid, too. Accordingly, a whole set of look-up tables is required for function inversion, one for each set of weights. Given parameters ε, η > 0, a formal description of our scheme is as follows.
1. Set R = ⌈1/ε⌉, ρ_r = r/R, r ∈ [R], and B = ⌈1/η⌉, ρ̄_b = b/B, b ∈ [B].
2. Tabulate w(ρ̄_b), b ∈ [B], and function values Φ(ρ_r; w(ρ̄_b)) = ⟨w(ρ̄_b), π(ρ_r)⟩, r ∈ [R], b ∈ [B].
Steps 1. and 2. constitute a one-time pre-processing. Given data q, q′, we proceed as follows.
3. Obtain π̂ and the pilot estimator ρ̂₀ = Φ⁻¹(⟨π̂, w̄⟩; w̄), with w̄ defined in the previous paragraph.
4. Return ρ̂ = Φ⁻¹(⟨π̂, w(ρ̃₀)⟩; w(ρ̃₀)), where ρ̃₀ is the value closest to ρ̂₀ among the {ρ̄_b}.
Step 2. requires about C = ⌈1/ε⌉ · ⌈1/η⌉ · L computations/storage. From experimental results we find that ε = 10⁻⁴ and η = 0.02 appear sufficient for practical purposes, which is still manageable even for b = 6 with L = 1056 cells, in which case C ≈ 5 · 10⁸. Again, this cost is incurred only once, independent of the data. The function inversions in steps 3. and 4. are replaced by table lookups. By organizing computations efficiently, the frequencies π̂ can be obtained from one pass over (q_j, q′_j), j ∈ [k]. Equipped with the look-up tables, estimating the similarity of two points requires O(k + L + log(1/ε)) flops, which is only slightly more than a linear scheme with O(k).
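A minimal sketch of steps 3. and 4. with the function inversion replaced by linear interpolation on the tabulated values; the grids and tables are assumed to come from the pre-processing in steps 1. and 2., and all names are illustrative.

```python
import numpy as np

def invert_phi(tau, rho_grid, phi_grid):
    """Approximate Phi^{-1}(tau; w) by interpolating the tabulated increasing
    map rho -> Phi(rho; w) = <w, pi(rho)>."""
    return np.interp(tau, phi_grid, rho_grid)

def rho_hat_lookup(pi_hat, rho_grid, w_bar, phi_bar, pilot_grid, w_tables, phi_tables):
    # step 3: pilot estimate with the averaged weight vector w_bar
    rho0 = invert_phi(pi_hat @ w_bar, rho_grid, phi_bar)
    # step 4: pick the tabulated weight vector closest to the pilot estimate
    b = np.argmin(np.abs(pilot_grid - rho0))
    return invert_phi(pi_hat @ w_tables[b], rho_grid, phi_tables[b])
```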
[Figure 4: Average fraction of K = 10 nearest neighbors retrieved vs. total # of bits (log₂ scale) for 1 ≤ b ≤ 6, on synthetic, farm, and rcv1 data. b = ∞ (dashed) represents the MLE based on unquantized data, with k as for b = 6. The oracle curve (dotted) corresponds to b = ∞ with maximum k (i.e., as for b = 1).]
4 Experiments
We here illustrate the approach outlined above in nearest neighbor search and linear classification.
The focus is on the trade-off between b and k, in particular in the presence of high similarity.
4.1 Nearest Neighbor Search
Finding the most similar data points for a given query is a standard task in information retrieval. Another application is nearest neighbor classification. We here investigate how the performance of our approach is affected by the choice of k, b and the quantization scheme. Moreover, we compare to two baseline competitors, the Hamming distance-based approach in [17] and the linear approach in which the quantized data are treated like the original unquantized data. For the approach in [17], similarity of the quantized data is measured in terms of their Hamming distance ∑_{j=1}^{k} I(q_j ≠ q′_j).
Synthetic data. We generate k i.i.d. samples of Gaussian data, where each sample X = (X₀, X₁, . . . , X₉₆) is generated as X₀ ∼ N(0, 1), X_j = ρ_j X₀ + (1 − ρ_j²)^{1/2} Z_j, 1 ≤ j ≤ 96, where the {Z_j}_{j=1}^{96} are i.i.d. N(0, 1) and independent of X₀. We have E[(X₀ − X_j)²] = 2(1 − ρ_j), where ρ_j = min{0.8 + (j − 1) · 0.002, 0.99}, 1 ≤ j ≤ 96. The thus generated data subsequently undergo b-bit quantization, for 1 ≤ b ≤ 6. Regarding the number of samples, we let k ∈ {2⁶/b, 2⁷/b, . . . , 2¹³/b}, which yields bit budgets between 2⁶ and 2¹³ for all b. The goal is to recover the K nearest neighbors of X₀ according to the {ρ_j}, i.e., X₉₆ is the nearest neighbor, etc. The purpose of this specific setting is to mimic the use of quantized random projections in the situation of a query x₀ and data points X = {x₁, . . . , x₉₆} having cosine similarities {ρ_j}_{j=1}^{96} with the query.
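The synthetic construction is vectorized in a few lines; a sketch:

```python
import numpy as np

rng = np.random.default_rng(0)
k = 2**10                                             # samples per coordinate
rho = np.minimum(0.8 + 0.002 * np.arange(96), 0.99)   # rho_1, ..., rho_96
X0 = rng.standard_normal(k)                           # the "query" coordinate
Z = rng.standard_normal((96, k))                      # independent noise
X = rho[:, None] * X0 + np.sqrt(1 - rho[:, None]**2) * Z
# E[(X0 - X_j)^2] = 2(1 - rho_j), so X_96 is the nearest neighbor of X0
```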
Real data. We consider the Farm Ads data set (n = 4,143, d = 54,877) from the UCI repository and the RCV1 data set (n = 20,242, d = 47,236) from the LIBSVM webpage [3]. For both data sets, each instance is normalized to unit norm. As queries we select all data points whose first neighbor has (cosine) similarity less than 0.999, whose tenth neighbor has similarity at least 0.8, and whose hundredth neighbor has similarity less than 0.5. These restrictions allow for a clearer presentation of our results. Prior to nearest neighbor search, b-bit quantized random projections are applied to the data, where the ranges for b and for the number of projections k are as for the synthetic data.
Quantization. Four different quantization schemes are considered: Lloyd-Max quantization and thresholds t_r = T_ρ̄ · r/(K − 1), r ∈ [K − 1], where T_ρ̄ is chosen to minimize I⁻¹(ρ̄); we consider ρ̄ ∈ {0.9, 0.95, 0.99}. For the linear approach, we choose μ_r = E[g | g ∈ (t_{r−1}, t_r)], r ∈ [K], where g ∼ N(0, 1). For our approach and that in [17] the specific choice of the {μ_r} is not important.
Evaluation. We perform 100 respectively 20 independent replications for synthetic respectively real data. We then inspect the top K neighbors for K ∈ {3, 5, 10} returned by the methods under consideration, and for each K we report the average fraction of true K nearest neighbors that have been retrieved over the 100 respectively 20 replications, where for the real data we also average over the chosen queries (366 for farm and 160 for RCV1).
The results of our experiments point to several conclusions that can be summarized as follows.
One-bit quantization is consistently outperformed by higher-bit quantization. The optimal choice of b
depends on the underlying similarities, and interacts with the choice of t. It is an encouraging result
that the performance based on full precision data (with k as for b = 6) can essentially be matched when quantized data is used.
[Figure 5: Average fraction of K = 10 nearest neighbors retrieved vs. total # of bits (log₂ scale) of our approach (MLE) relative to that based on the Hamming distance and the linear approach for b = 2, 4, on the farm and rcv1 data sets.]
For b = 2, the performance of the MLE is only marginally better than the approach based on the Hamming distance. The superiority of the former becomes apparent once b ≥ 4, which is expected since for increasing b the Hamming distance is statistically inefficient, as it only uses the information whether a pair of quantized data agrees/disagrees. Some of these findings are reflected in Figures 4 and 5. We refer to the supplement for additional figures.
4.2 Linear Classification
We here outline an application to linear classification given features generated by (quantized) random projections. We aim at reconstructing the original Gram matrix G = (⟨x_i, x_{i′}⟩)_{1≤i,i′≤n} from Ĝ = (ĝ_{ii′}), where for i ≠ i′, ĝ_{ii′} = ρ̂_MLE(q_i, q_{i′}) equals the MLE of ⟨x_i, x_{i′}⟩ given a quantized data pair q_i, q_{i′}, and ĝ_{ii′} = 1 else (assuming normalized data). The matrix Ĝ is subsequently fed into LIBSVM. For testing, the inner products between test and training pairs are approximated accordingly.
Setup. We work with the farm data set using the first 3,000 samples for training, and the Arcene data set from the UCI repository with 100 training and 100 test samples in dimension d = 10⁴. The choice of k and b is as in §4.1; for Arcene, the total bit budget is lowered by a factor of 2. We perform 20 independent replications for each combination of k and b. For SVM classification, we consider logarithmically spaced grids between 10⁻³ and 10³ for the parameter C (cf. LIBSVM manual).
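A sketch of this pipeline, with scikit-learn's precomputed-kernel SVM standing in for LIBSVM; rho_mle_pair is an assumed routine returning the MLE estimate for a pair of quantized vectors.

```python
import numpy as np
from sklearn.svm import SVC

def gram_from_quantized(Q_rows, rho_mle_pair):
    """Reconstruct G_hat from pairwise MLE estimates (normalized data)."""
    n = len(Q_rows)
    G = np.eye(n)
    for i in range(n):
        for j in range(i + 1, n):
            G[i, j] = G[j, i] = rho_mle_pair(Q_rows[i], Q_rows[j])
    return G

# G_hat = gram_from_quantized(Q_train, rho_mle_pair)
# clf = SVC(C=1.0, kernel="precomputed").fit(G_hat, y_train)
# test kernel: K_test[i, j] = rho_mle_pair(Q_test[i], Q_train[j]); clf.predict(K_test)
```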
[Figure 6: (L, M): accuracy vs. bits, optimized over the SVM parameter C, for farm and arcene. (R): accuracy vs. C for a fixed # bits (arcene, total # bits = 2¹⁰). b = ∞ indicates the performance based on unquantized data with k as for b = 6. The oracle curve (dotted) corresponds to b = ∞ with maximum k (i.e., as for b = 1).]
Figure 6 (L, M) displays the average accuracy on the test data (after optimizing over C) as a function of the bit budget. For the Farm Ads data set, b = 2 achieves the best trade-off, followed by b = 1 and b = 3. For the Arcene data set, b = 3, 4 is optimal. In both cases, it does not pay off to go for b ≥ 5.
5 Conclusion
In this paper, we bridge the gap between random projections with full precision and random projections quantized to a single bit. While Theorem 1 indicates that an exact counterpart to the J-L
lemma is not attainable, other theoretical and empirical results herein point to the usefulness of the
intermediate cases which give rise to an interesting trade-off that deserves further study in contexts
where random projections can naturally be applied e.g. linear learning, nearest neighbor classification
or clustering. The optimal choice of b eventually depends on the application: increasing b puts an
emphasis on local rather than global similarity preservation.
Acknowledgement
The work of Ping Li and Martin Slawski is supported by NSF-Bigdata-1419210 and NSF-III-1360971.
The work of Michael Mitzenmacher is supported by NSF CCF-1535795 and NSF CCF-1320231.
References
[1] E. Bingham and H. Mannila. Random projection in dimensionality reduction: applications to image and text data. In Conference on Knowledge Discovery and Data Mining (KDD), pages 245–250, 2001.
[2] C. Boutsidis, A. Zouzias, and P. Drineas. Random projections for k-means clustering. In Advances in Neural Information Processing Systems (NIPS), pages 298–306, 2010.
[3] C.-C. Chang and C.-J. Lin. LIBSVM: A library for support vector machines. ACM Transactions on Intelligent Systems and Technology, 2:27:1–27:27, 2011. http://www.csie.ntu.edu.tw/~cjlin/libsvm.
[4] M. Charikar. Similarity estimation techniques from rounding algorithms. In Proceedings of the Symposium on Theory of Computing (STOC), pages 380–388, 2002.
[5] S. Dasgupta. Learning mixtures of Gaussians. In FOCS, pages 634–644, 1999.
[6] S. Dasgupta. An elementary proof of a theorem of Johnson and Lindenstrauss. Random Structures and Algorithms, 22:60–65, 2003.
[7] D. Fradkin and D. Madigan. Experiments with random projections for machine learning. In Conference on Knowledge Discovery and Data Mining (KDD), pages 517–522, 2003.
[8] A. Genz. BVN: A function for computing bivariate normal probabilities. http://www.math.wsu.edu/faculty/genz/homepage.
[9] P. Indyk and R. Motwani. Approximate nearest neighbors: towards removing the curse of dimensionality. In Proceedings of the Symposium on Theory of Computing (STOC), pages 604–613, 1998.
[10] L. Jacques. A quantized Johnson-Lindenstrauss lemma: The finding of Buffon's needle. IEEE Transactions on Information Theory, 61:5012–5027, 2015.
[11] L. Jacques, K. Degraux, and C. De Vleeschouwer. Quantized iterative hard thresholding: Bridging 1-bit and high-resolution quantized compressed sensing. arXiv:1305.1786, 2013.
[12] W. Johnson and J. Lindenstrauss. Extensions of Lipschitz mappings into a Hilbert space. Contemporary Mathematics, pages 189–206, 1984.
[13] J. Kieffer. Uniqueness of locally optimal quantizer for log-concave density and convex error weighting function. IEEE Transactions on Information Theory, 29:42–47, 1983.
[14] J. Laska and R. Baraniuk. Regime change: Bit-depth versus measurement-rate in compressive sensing. IEEE Transactions on Signal Processing, 60:3496–3505, 2012.
[15] M. Li, S. Rane, and P. Boufounos. Quantized embeddings of scale-invariant image features for mobile augmented reality. In International Workshop on Multimedia Signal Processing (MMSP), pages 1–6, 2012.
[16] P. Li, T. Hastie, and K. Church. Improving random projections using marginal information. In Annual Conference on Learning Theory (COLT), pages 635–649, 2006.
[17] P. Li, M. Mitzenmacher, and A. Shrivastava. Coding for random projections. In Proceedings of the International Conference on Machine Learning (ICML), 2014.
[18] M. Lopes, L. Jacob, and M. Wainwright. A more powerful two-sample test in high dimensions using random projection. In Advances in Neural Information Processing Systems 24, pages 1206–1214, 2011.
[19] O. Maillard and R. Munos. Compressed least-squares regression. In Advances in Neural Information Processing Systems (NIPS), pages 1213–1221, 2009.
[20] J. Max. Quantizing for minimum distortion. IRE Transactions on Information Theory, 6:7–12, 1960.
[21] L. Shenton and K. Bowman. Higher moments of a maximum-likelihood estimate. Journal of the Royal Statistical Society, Series B, pages 305–317, 1963.
[22] R. Srivastava, P. Li, and D. Ruppert. RAPTT: An exact two-sample test in high dimensions using random projections. Journal of Computational and Graphical Statistics, 25(3):954–970, 2016.
[23] S. Vempala. The Random Projection Method. American Mathematical Society, 2005.
[24] F. Wang and P. Li. Efficient nonnegative matrix factorization with random projections. In SDM, pages 281–292, Columbus, Ohio, 2010.
6,072 | 6,493 | Adaptive Concentration Inequalities for Sequential Decision Problems
Shengjia Zhao
Tsinghua University
zhaosj12@stanford.edu
Enze Zhou
Tsinghua University
zhouez_thu_12@126.com
Ashish Sabharwal
Allen Institute for AI
AshishS@allenai.org
Stefano Ermon
Stanford University
ermon@cs.stanford.edu
Abstract
A key challenge in sequential decision problems is to determine how many samples are needed for an agent to make reliable decisions with good probabilistic
guarantees. We introduce Hoeffding-like concentration inequalities that hold for
a random, adaptively chosen number of samples. Our inequalities are tight under
natural assumptions and can greatly simplify the analysis of common sequential
decision problems. In particular, we apply them to sequential hypothesis testing,
best arm identification, and sorting. The resulting algorithms rival or exceed the
state of the art both theoretically and empirically.
1 Introduction
Many problems in artificial intelligence (AI) and machine learning (ML) involve designing agents
that interact with stochastic environments. The environment is typically modeled with a collection
of random variables. A common assumption is that the agent acquires information by observing
samples from these random variables. A key problem is to determine the number of samples that are
required for the agent to make sound inferences and decisions based on the data it has collected.
Many abstract problems fit into this general framework, including sequential hypothesis testing, e.g.,
testing for positiveness of the mean [18, 6], analysis of streaming data [19], best arm identification
for multi-arm bandits (MAB) [1, 5, 13], etc. These problems involve the design of a sequential
algorithm that needs to decide, at each step, either to acquire a new sample, or to terminate and output
a conclusion, e.g., decide whether the mean of a random variable is positive or not. The challenge is
that obtaining too many samples will result in inefficient algorithms, while taking too few might lead
to the wrong decision.
Concentration inequalities such as Hoeffding's inequality [11], the Chernoff bound, and Azuma's inequality [7, 5] are among the main analytic tools. These inequalities are used to bound the probability of a
large discrepancy between sample and population means, for a fixed number of samples n. An agent
can control its risk by making decisions based on conclusions that hold with high confidence, due to
the unlikely occurrence of large deviations. However, these inequalities only hold for a fixed, constant
number of samples that is decided a-priori. On the other hand, we often want to design agents that
make decisions adaptively based on the data they collect. That is, we would like the number of
samples itself to be a random variable. Traditional concentration inequalities, however, often do
not hold when the number of samples is stochastic. Existing analysis requires ad-hoc strategies to
bypass this issue, such as union bounding the risk over time [18, 17, 13]. These approaches can lead
to suboptimal algorithms.
We introduce Hoeffding-like concentration inequalities that hold for a random, adaptively chosen
number of samples. Interestingly, we can achieve our goal with a small double logarithmic overhead
with respect to the number of samples required for standard Hoeffding inequalities. We also show
that our bounds cannot be improved under some natural restrictions. Even though related inequalities
have been proposed before [15, 2, 3], we show that ours are significantly tighter, and come with
a complete analysis of the fundamental limits involved. Our inequalities are directly applicable to
a number of sequential decision problems. In particular, we use them to design and analyze new
algorithms for sequential hypothesis testing, best arm identification, and sorting. Our algorithms rival
or outperform state-of-the-art techniques both theoretically and empirically.
2 Adaptive Inequalities and Their Properties
We begin with some definitions and notation:
Definition 1. [20] Let X be a zero mean random variable. For any d > 0, we say X is d-subgaussian if ∀r ∈ ℝ,

    E[e^{rX}] ≤ e^{d²r²/2}
Note that a random variable can be subgaussian only if it has zero mean [20]. However, with some abuse of notation, we say that any random variable X is subgaussian if X − E[X] is subgaussian.
Many important types of distributions are subgaussian. For example, by Hoeffding's Lemma [11], a distribution bounded in an interval of width 2d is d-subgaussian, and a Gaussian random variable N(0, σ²) is σ-subgaussian. Henceforth, we shall assume that the distributions are 1/2-subgaussian.
Any d-subgaussian random variable can be scaled by 1/(2d) to be a 1/2-subgaussian random variable.
Definition 2 (Problem setup). Let X be a zero mean 1/2-subgaussian random variable. {X₁, X₂, . . .} are i.i.d. random samples of X. Let S_n = ∑_{i=1}^{n} X_i be a random walk. J is a stopping time with respect to {X₁, X₂, . . .}. We let J take a special value ∞ where Pr[J = ∞] = 1 − lim_{n→∞} Pr[J ≤ n]. We also let f : ℕ → ℝ⁺ be a function that will serve as a boundary for the random walk.
We note that because it is possible for J to be infinite, to simplify notation, what we really mean by Pr[E_J], where E_J is some event, is Pr[{J < ∞} ∩ E_J]. We can thus often use Pr[E_J] without confusion.
2.1 Standard vs. Adaptive Concentration Inequalities
There is a very large class of well known inequalities that bound the probability of large deviations by
confidence that increases exponentially w.r.t. bound tightness. An example is the Hoeffding inequality
[12] which states, using the definitions mentioned above,
    Pr[S_n ≥ b√n] ≤ e^{−2b²}    (1)
Other examples include Azuma's inequality, the Chernoff bound [7], and Bernstein inequalities [21].
However, these inequalities apply if n is a constant chosen in advance, or independent of the
underlying process, but are generally untrue when n is a stopping time J that, being a random
variable, depends on the process. In fact we shall later show in Theorem 3 that we can construct a
stopping time J such that
    Pr[S_J ≥ b√J] = 1    (2)
for any b > 0, even when we put strong restrictions on J.
Comparing Eqs. (1) and (2), one clearly sees how Chernoff and Hoeffding bounds are applicable only
to algorithms whose decision to continue to sample or terminate is fixed a priori. This is a severe
limitation for stochastic algorithms that have uncertain stopping conditions that may depend on the
underlying process. We call a bound that holds for all possible stopping rules J an adaptive bound.
2.2 Equivalence Principle
We start with the observation that finding a probabilistic bound on the position of the random walk
SJ that holds for any stopping time J is equivalent to finding a deterministic boundary f (n) that the
walk is unlikely to ever cross. Formally,
Proposition 1. For any δ > 0,

    Pr[S_J ≥ f(J)] ≤ δ    (3)

for any stopping time J if and only if

    Pr[{∃n, S_n ≥ f(n)}] ≤ δ    (4)
Intuitively, for any f (n) we can choose an adversarial stopping rule that terminates the process as
soon as the random walk crosses the boundary f (n). We can therefore achieve (3) for all stopping
times J only if we guarantee that the random walk is unlikely to ever cross f (n), as in Eq. (4).
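This adversarial rule is easy to simulate. The sketch below uses Rademacher ±1/2 steps (which are 1/2-subgaussian) and stops the walk the first time it touches f(n) = b√n; over many runs, a large fraction of walks stop, illustrating Eq. (2). The run counts are arbitrary.

```python
import numpy as np

def first_crossing(f, n_max, rng):
    """Stop the walk as soon as S_n >= f(n); return the stopping time or None."""
    S = 0.0
    for n in range(1, n_max + 1):
        S += rng.choice((-0.5, 0.5))   # Rademacher +-1/2 step, 1/2-subgaussian
        if S >= f(n):
            return n
    return None

rng = np.random.default_rng(0)
b, runs = 0.5, 200
hits = sum(first_crossing(lambda n: b * np.sqrt(n), 10_000, rng) is not None
           for _ in range(runs))
print(hits / runs)   # a large fraction of walks crosses b * sqrt(n)
```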
2.3 Related Inequalities
The problem of studying the supremum of a random walk has a long history. The seminal work of Kolmogorov and Khinchin [4] characterized the limiting behavior of a zero mean random walk with unit variance:

    lim sup_{n→∞} S_n / √(2n log log n) = 1  a.s.
This law is called the Law of Iterated Logarithms (LIL), and sheds light on the limiting behavior of a
random walk. In our framework, this implies

    lim_{m→∞} Pr[∃n > m : S_n ≥ √(2an log log n)] = 1 if a < 1, and = 0 if a > 1.
This theorem provides a very strong result on the asymptotic behavior of the walk. However, in most
ML and statistical applications, we are also interested in the finite-time behavior, which we study.
The problem of analyzing the finite-time properties of a random walk has been considered before
in the ML literature. It is well known, and can be easily proven using Hoeffding's inequality union bounded over all possible times, that a trivial bound

    f(n) = √(n log(2n²/δ)/2)    (5)
holds in the sense of Pr[∃n, S_n ≥ f(n)] ≤ δ. This is true because, by the union bound and Hoeffding's inequality [12],

    Pr[∃n, S_n ≥ f(n)] ≤ ∑_{n=1}^{∞} Pr[S_n ≥ f(n)] ≤ ∑_{n=1}^{∞} e^{−log(2n²/δ)} = ∑_{n=1}^{∞} δ/(2n²) ≤ δ
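The trivial boundary (5) is immediate to implement; it serves as a baseline for the adaptive bound developed below.

```python
import numpy as np

def trivial_boundary(n, delta):
    """f(n) = sqrt(n * log(2 n^2 / delta) / 2), the union-bound boundary (5)."""
    return np.sqrt(n * np.log(2 * n**2 / delta) / 2)
```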
Recently, inspired by the Law of Iterated Logarithms, Jamieson et al. [15], Jamieson and Nowak [13] and Balsubramani [2] proposed a boundary f(n) that scales asymptotically as Θ(√(n log log n)) such that the "crossing event" {∃n, S_n ≥ f(n)} is guaranteed to occur with a low probability. They refer to this as a finite time LIL inequality. These bounds, however, have significant room for improvement. Furthermore, [2] holds asymptotically, i.e., only w.r.t. the event {∃n > N, S_n ≥ f(n)} for a sufficiently large (but finite) N, rather than across all time steps. In the following sections, we develop general bounds that improve upon these methods.
3 New Adaptive Hoeffding-like Bounds
Our first main result is an alternative to finite time LIL that is both tighter and simpler:
Theorem 1 (Adaptive Hoeffding Inequality). Let X_i be zero mean 1/2-subgaussian random variables, let {S_n = ∑_{i=1}^{n} X_i, n ≥ 1} be a random walk, and let f : ℕ → ℝ⁺. Then,

1. If lim_{n→∞} f(n)/√((1/2) n log log n) < 1, there exists a distribution for X such that

    Pr[{∃n, S_n ≥ f(n)}] = 1

2. If f(n) = √(a n log(log_c n + 1) + bn), c > 1, a > c/2, b > 0, and ζ is the Riemann-ζ function, then

    Pr[{∃n, S_n ≥ f(n)}] ≤ ζ(2a/c) e^{−2b/c}    (6)
We also remark that in practice the values of a and c do not significantly affect the quality of the
bound. We recommend fixing a = 0.6 and c = 1.1 and will use this configuration in all subsequent
experiments. The parameter b is the main factor controlling the confidence we have on the bound (6),
i.e., the risk. The value of b is chosen so that the bound holds with probability at least 1 ? ?, where ?
is a user specified parameter.
Based on Proposition 1, and fixing a and c as above, we get a readily applicable corollary:
Corollary 1. Let J be any random variable taking value in N. If

    f(n) = √( 0.6 n log(log_{1.1} n + 1) + bn )

then

    Pr[S_J ≥ f(J)] ≤ 12 e^(−1.8b)
The bound we achieve is very similar in form to Hoeffding inequality (1), with an extra O(log log n)
slack to achieve robustness to stochastic, adaptively chosen stopping times. We shall refer to this
inequality as the Adaptive Hoeffding (AH) inequality.
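As a concrete illustration, the short sketch below (our own, not part of the paper) evaluates the Corollary 1 boundary, choosing b so that 12 e^(−1.8b) = δ for a user-specified risk δ:

```python
import numpy as np

def ah_boundary(n, delta):
    """Adaptive Hoeffding boundary of Corollary 1 (a = 0.6, c = 1.1).

    b is chosen so that 12 * exp(-1.8 b) = delta, which makes
    Pr[S_J >= f(J)] <= delta for every stopping time J."""
    n = np.asarray(n, dtype=float)
    b = np.log(12.0 / delta) / 1.8
    log_c_n = np.log(n) / np.log(1.1)   # log base c = 1.1
    return np.sqrt(0.6 * n * np.log(log_c_n + 1.0) + b * n)

print(ah_boundary([10, 100, 1000, 10_000], delta=0.05))
```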
Informally, part 1 of Theorem 1 implies that if we choose a boundary f(n) that is convergent
w.r.t. √(n log log n) and would like to bound the probability of the threshold-crossing event,
√((1/2) n log log n) is the asymptotically smallest f(n) we can have; anything asymptotically smaller
will be crossed with probability 1. Furthermore, part 2 implies that as long as a > 1/2, we can
choose a sufficiently large b so that threshold crossing has an arbitrarily small probability. Combined,
we thus have that for any ε > 0, the minimum f (call it f*) needed to ensure an arbitrarily small
threshold-crossing probability can be bounded asymptotically as follows:

    √(1/2) √(n log log n) ≤ f*(n) ≤ (√(1/2) + ε) √(n log log n)        (7)
This fact is illustrated in Figure 1, where we plot the bound f(n) from Corollary 1 with
12e^(−1.8b) = δ = 0.05 (AH, green). The corresponding Hoeffding bound (red) that would have
held (with the same confidence, had n been a constant) is plotted as well. We also show draws
from an unbiased random walk (blue). Out of the 1000 draws we sampled, approximately 25%
of them cross the Hoeffding bound (red) before time 10⁵, while none of them cross the adaptive
bound (green), demonstrating the necessity of the extra √(log log n) factor even in practice.
We also compare our bound with the trivial bound (5), the LIL bound in Lemma 1 of [15] and
Theorem 2 of [2]. The graph in Figure 2 shows the relative performance of the three bounds
across different values of n and risk δ. The LIL bound of [15] is plotted with parameter ε = 0.01
as recommended. We also experimented with other values of ε, obtaining qualitatively similar
results. It can be seen that our bound is significantly tighter (by roughly a factor of 1.5) across
all values of n and δ that we evaluated.
Figure 1: Illustration of Theorem 1 part 2. Each blue line represents a sampled walk. Although the
probability of reaching higher than the Hoeffding bound (red) at a given time is small, the threshold
is crossed almost surely. The new bound (green) remains unlikely to be crossed.

3.1 More General, Non-Smooth Boundaries
If we relax the requirement that f(n) must be smooth, or, formally, remove the condition that

    lim_{n→∞} f(n) / √(n log log n)

must exist or go to ∞, then we might be able to obtain tighter bounds.
Figure 2: Comparison of Adaptive Hoeffding (AH) and LIL [15], LIL2 [2] and the trivial bound. A
threshold function f(n) is computed and plotted according to the four bounds, so that crossing occurs
with bounded probability δ (risk). The two plots correspond to different risk levels (0.01 and 0.1).
For example many algorithms such as median elimination [9] or the exponential gap algorithm [17, 6]
make (sampling) decisions "in batch", and therefore can only stop at certain pre-defined times. The
intuition is that if more samples are collected between decisions, the failure probability can be easier
to control. This is equivalent to restricting the stopping time J to take values in a set N ? N.
Equivalently we can also think of using a boundary function f (n) defined as follows:
    f_N(n) = { f(n)   if n ∈ N
             { +∞     otherwise        (8)

Very often the set N is taken to be the following set:

Definition 3 (Exponentially Sparse Stopping Time). We denote by N_c, c > 1, the set
N_c = {⌈cⁿ⌉ : n ∈ N}.
Methods based on exponentially sparse stopping times often achieve asymptotically optimal performance on a range of sequential decision making problems [9, 18, 17]. Here we construct an
alternative to Theorem 1 based on exponentially sparse stopping times. We obtain a bound that is
asymptotically equivalent, but has better constants and is often more effective in practice.
Theorem 2 (Exponentially Sparse Adaptive Hoeffding Inequality). Let {S_n, n ≥ 1} be a random
walk with 1/2-subgaussian increments. If

    f(n) = √( an log(log_c n + 1) + bn )

and c > 1, a > 1/2, b > 0, we have

    Pr[{∃n ∈ N_c, S_n ≥ f(n)}] ≤ ζ(2a) e^(−2b)
We call this inequality the exponentially sparse adaptive Hoeffding (ESAH) inequality. Compared to
(6), the main improvement is the lack of the constant c in the RHS. In all subsequent experiments we
fix a = 0.55 and c = 1.05.
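A small sketch of the exponentially sparse stopping set and the ESAH boundary follows (our own illustration; the truncated-zeta helper, which approximates ζ by a partial sum plus an integral tail correction, is an assumption made for self-containedness):

```python
import numpy as np

def zeta(s, terms=10_000):
    """Riemann zeta for s > 1: partial sum plus an integral tail correction."""
    k = np.arange(1, terms + 1, dtype=float)
    return np.sum(k ** -s) + terms ** (1.0 - s) / (s - 1.0)

def sparse_times(c, n_max):
    """Exponentially sparse stopping set N_c = {ceil(c^k)} up to n_max (c > 1)."""
    times, k = set(), 1
    while np.ceil(c ** k) <= n_max:
        times.add(int(np.ceil(c ** k)))
        k += 1
    return sorted(times)

def esah_boundary(n, delta, a=0.55, c=1.05):
    """ESAH boundary of Theorem 2; zeta(2a) * exp(-2b) = delta fixes b."""
    b = 0.5 * np.log(zeta(2.0 * a) / delta)
    n = np.asarray(n, dtype=float)
    return np.sqrt(a * n * np.log(np.log(n) / np.log(c) + 1.0) + b * n)

print(sparse_times(1.05, 100))   # duplicates at small k collapse: [2, 3, 4, ...]
print(esah_boundary([10, 100, 1000], delta=0.05))
```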
Finally, we provide limits for any boundary, including those obtained by a batch-sampling strategy.
Theorem 3. Let {S_n, n ≥ 1} be a zero mean random walk with 1/2-subgaussian increments. Let
f : N → R⁺. Then

    1. If there exists a constant C ≥ 0 such that lim inf_{n→∞} f(n)/√n < C, then

           Pr[{∃n, S_n ≥ f(n)}] = 1

    2. If lim_{n→∞} f(n)/√n = +∞, then for any δ > 0 there exists an infinite set N ⊆ N such that

           Pr[{∃n ∈ N, S_n ≥ f(n)}] < δ
Informally, part 1 states that if a threshold f(n) drops an infinite number of times below an asymptotic
bound of Θ(√n), then the threshold will be crossed with probability 1. This rules out Hoeffding-like
bounds. If f(n) grows asymptotically faster than √n, then one can "sparsify" f(n) so that it will be
crossed with an arbitrarily small probability. In particular, a boundary with the form in Equation (8)
can be constructed to bound the threshold-crossing probability below any δ (part 2 of the Theorem).
4 Applications to ML and Statistics
We now apply our adaptive bound results to design new algorithms for various classic problems in ML
and statistics. Our bounds can be used to analyze algorithms for many natural sequential problems,
leading to a unified framework for such analysis. The resulting algorithms are asymptotically optimal
or near optimal, and outperform competing algorithms in practice. We provide two applications in
the following subsections and leave another to the appendix.
4.1 Sequential Testing for Positiveness of Mean
Our first example is sequential testing for the positiveness of the mean of a bounded random variable.
In this problem, there is a 1/2-subgaussian random variable X with (unknown) mean µ ≠ 0. At each
step, an agent can either request a sample from X, or terminate and declare whether or not E[X] > 0.
The goal is to bound the agent's error probability by some user specified value δ.
This problem is well studied [10, 18, 6]. In particular Karp and Kleinberg [18] show in Lemma 3.2
("second simulation lemma") that this problem can be solved with an O(log(1/δ) log log(1/δ)/µ²)
algorithm with confidence 1 − δ. They also prove a lower bound of Ω(log log(1/δ)/µ²). Recently,
Chen and Li [6] referred to this problem as the SIGN-ξ problem and provided similar results.
We propose an algorithm that achieves the optimal asymptotic complexity and performs very well
in practice, outperforming competing algorithms by a wide margin (because of better asymptotic
constants). The algorithm is captured by the following definition.
Definition 4 (Boundary Sequential Test). Let f : N → R⁺ be a function. We draw i.i.d. samples
X_1, X_2, . . . from the target distribution X. Let S_n = Σ_{i=1}^n X_i be the corresponding partial sum.

    1. If S_n ≥ f(n), terminate and declare E[X] > 0;
    2. if S_n ≤ −f(n), terminate and declare E[X] < 0;
    3. otherwise increment n and obtain a new sample.

We call such a test a symmetric boundary test. In the following theorem we analyze its performance.
Theorem 4. Let δ > 0 and X be any 1/2-subgaussian distribution with non-zero mean µ. Let

    f(n) = √( an log(log_c n + 1) + bn )

where c > 1, a > c/2, and b = (c/2) log ζ(2a/c) + (c/2) log(1/δ). Then, with probability at least 1 − δ,
a symmetric boundary test terminates with the correct sign for E[X], and with probability 1 − δ, for
any ε > 0 it terminates in at most

    (2c + ε) log(1/δ) log log(1/δ) / µ²

samples asymptotically w.r.t. 1/δ and 1/µ.

Figure 3: Empirical performance of boundary tests. The plot on the left is the algorithm in
Definition 4 and Theorem 4 with δ = 0.05; the plot on the right uses half the correct threshold.
Despite a speed-up of 4 times, the empirical accuracy drops below the requirement.
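A sketch of the symmetric boundary test with the Theorem 4 boundary is given below (our own illustrative implementation; the zeta helper is a truncated-sum approximation, and the Bernoulli example distribution is an arbitrary choice matching the experiments of Section 4.1.1):

```python
import numpy as np

def zeta(s, terms=10_000):
    k = np.arange(1, terms + 1, dtype=float)
    return np.sum(k ** -s) + terms ** (1.0 - s) / (s - 1.0)

def sign_test(sample, delta, a=0.6, c=1.1, n_max=10**7):
    """Symmetric boundary test (Definition 4) with the Theorem 4 boundary.
    `sample()` draws one value of X; returns (declared sign, samples used)."""
    b = (c / 2.0) * np.log(zeta(2.0 * a / c)) + (c / 2.0) * np.log(1.0 / delta)
    s = 0.0
    for n in range(1, n_max + 1):
        s += sample()
        f = np.sqrt(a * n * np.log(np.log(n) / np.log(c) + 1.0) + b * n)
        if s >= f:
            return +1, n   # declare E[X] > 0
        if s <= -f:
            return -1, n   # declare E[X] < 0
    raise RuntimeError("no decision within n_max samples")

rng = np.random.default_rng(1)
mu = 0.05   # Bernoulli over {-1/2, +1/2} with mean mu
result = sign_test(lambda: rng.choice([-0.5, 0.5], p=[0.5 - mu, 0.5 + mu]),
                   delta=0.05)
print(result)   # e.g. (1, n) with n on the order of a few thousand
```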
4.1.1 Experiments
To evaluate the empirical performance of our algorithm (AH-RW), we run an experiment where
X is a Bernoulli distribution over {−1/2, 1/2}, for various values of the mean parameter µ. The
confidence level δ is set to 0.05, and the results are averaged across 100 independent runs. For this
experiment and other experiments in this section, we set the parameters a = 0.6 and c = 1.1. We
plot in Figure 3 the empirical accuracy, average number of samples used (runtime), and the number
of samples after which 90% of the runs terminate.
The empirical accuracy of AH-RW is very high, as predicted by Theorem 4. Our bound is
empirically very tight. If we decrease the bound by a factor of 2, that is we use f(n)/2 instead of
f(n), we get the curve in the right hand side plot of Figure 3. Despite a speed-up of approximately
4 times, the empirical accuracy gets below the 0.95 requirement, especially when µ is small.

We also compare our method, AH-RW, to the Exponential Gap algorithm from [6] and the
algorithm from the "second simulation lemma" of [18]. Both of these algorithms rely on a
batch sampling idea and have very similar performance. The results show that our algorithm
is at least an order of magnitude faster (note the log-scale). We also evaluate a variant of
our algorithm (ESAH-RW) where the boundary function f(n) is taken to be f_{N_c} as in Theorem 2
and Equation (8). This algorithm achieves very similar performance as Theorem 4, justifying
the practical applicability of batch sampling.
Figure 4: Comparison of various algorithms for deciding the positiveness of the mean of a Bernoulli
random variable. AH-RW and ESAH-RW use orders of magnitude fewer samples than alternatives.

4.2 Best Arm Identification
The MAB (Multi-Arm Bandit) problem [1, 5] studies the optimal behavior of an agent when faced
with a set of choices with unknown rewards. There are several flavors of the problem. In this paper,
we focus on the fixed confidence best arm identification problem [13]. In this setting, the agent
is presented with a set of arms A, where the arms are indistinguishable except for their expected
reward. The agent is to make sequential decisions at each time step to either pull an arm α ∈ A, or to
terminate and declare one arm to have the largest expected reward. The goal is to identify the best
arm with a probability of error smaller than some pre-specified δ > 0.
To facilitate the discussion, we first define the notation we will use. We denote by K = |A| the
total number of arms. We denote by µ_α the true mean of an arm α, and α* = arg max_α µ_α. We also
define µ̂_α(n_α) as the empirical mean after n_α pulls of an arm.
This problem has been extensively studied, including recently [8, 14, 17, 15, 6]. A survey is presented
by Jamieson and Nowak [13], who classify existing algorithms into three classes: action elimination
based [8, 14, 17, 6], which achieve good asymptotics but often perform unsatisfactorily in practice;
UCB based, such as lil'UCB by [15]; and LUCB based approaches, such as [16, 13], which achieve
sub-optimal asymptotics of O(K log K) but perform very well in practice. We provide a new
algorithm, Algorithm 1, that outperforms all previous algorithms, including LUCB.
Theorem 5. For any δ > 0, with probability 1 − δ, Algorithm 1 outputs the optimal arm.
Algorithm 1 Adaptive Hoeffding Race (set of arms A, K = |A|, parameter δ)
    fix parameters a = 0.6, c = 1.1, b = (c/2)(log ζ(2a/c) + log(2/δ))
    initialize for all arms α ∈ A, n_α = 0; initialize Â = A, the set of remaining arms
    while Â has more than one arm do
        Let α̂* be the arm with highest empirical mean, and compute for all α ∈ Â

            f_α(n_α) = { √( (a log(log_c n_α + 1) + b + (c/2) log |Â|) / n_α )   if α = α̂*
                       { √( (a log(log_c n_α + 1) + b) / n_α )                   otherwise

        draw a sample from the arm with the largest value of f_α(n_α) in Â; n_α = n_α + 1
        remove from Â any arm α with µ̂_α + f_α(n_α) < µ̂_{α̂*} − f_{α̂*}(n_{α̂*})
    end while
    return the only element in Â
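The following Python sketch mirrors our reconstruction of Algorithm 1. Since the extracted pseudocode is partly garbled, the sampling rule (pull the arm with the widest confidence radius) and the (c/2) log|Â| bonus for the empirical leader should be read as assumptions rather than a definitive transcription:

```python
import numpy as np

def zeta(s, terms=10_000):
    k = np.arange(1, terms + 1, dtype=float)
    return np.sum(k ** -s) + terms ** (1.0 - s) / (s - 1.0)

def ah_race(pull, K, delta, a=0.6, c=1.1):
    """Sketch of the Adaptive Hoeffding Race; pull(i) samples arm i once."""
    b = (c / 2.0) * (np.log(zeta(2.0 * a / c)) + np.log(2.0 / delta))
    sums = np.zeros(K)
    n = np.zeros(K, dtype=int)
    alive = list(range(K))
    for i in alive:                      # one initial pull per arm
        sums[i] += pull(i)
        n[i] += 1
    while len(alive) > 1:
        means = sums[alive] / n[alive]
        leader = alive[int(np.argmax(means))]

        def radius(i):
            bonus = (c / 2.0) * np.log(len(alive)) if i == leader else 0.0
            num = a * np.log(np.log(n[i]) / np.log(c) + 1.0) + b + bonus
            return np.sqrt(num / n[i])

        i = max(alive, key=radius)       # sample the most uncertain arm
        sums[i] += pull(i)
        n[i] += 1
        # eliminate arms whose upper bound falls below the leader's lower bound
        lo = sums[leader] / n[leader] - radius(leader)
        alive = [j for j in alive
                 if j == leader or sums[j] / n[j] + radius(j) >= lo]
    return alive[0], int(n.sum())

rng = np.random.default_rng(0)
means = [0.8, 0.6, 0.4]
best, total = ah_race(lambda i: rng.normal(means[i], 0.5), K=3, delta=0.05)
print(best, total)   # expect arm 0
```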
4.2.1 Experiments
We implemented Algorithm 1 and a variant where the boundary f is set to f_{N_c} as in Theorem 2.
We call this alternative version ES-AHR, standing for exponentially sparse adaptive Hoeffding
race. For comparison we implemented the lil'UCB and lil'UCB+LS described in [14], and lil'LUCB
described in [13]. Based on the results of [13], these algorithms are the fastest known to date.

We also implemented the DISTRIBUTION-BASED-ELIMINATION from [6], which theoretically is
the state-of-the-art in terms of asymptotic complexity. Despite this fact, the empirical performance
is orders of magnitude worse compared to other algorithms for the instance sizes we experimented
with.

Figure 5: Comparison of various methods for best arm identification. Our methods AHR and
ES-AHR are significantly faster than state-of-the-art. Batch sampling ES-AHR is the most effective
one.

We experimented with most of the distribution families considered in [13] and found qualitatively
similar results. We only report results using the most challenging distribution we found that was
presented in that survey, where µ_i = 1 − (i/K)^0.6. The distributions are Gaussian with 1/4
variance, and δ = 0.05. The sample count is measured in units of H1 = Σ_{α≠α*} Δ_α^(−2) hardness [13].
5 Conclusions
We studied the threshold crossing behavior of random walks, and provided new concentration
inequalities that, unlike classic Hoeffding-style bounds, hold for any stopping rule. We showed that
these inequalities can be applied to various problems, such as testing for positiveness of mean, best
arm identification, obtaining algorithms that perform well both in theory and in practice.
Acknowledgments
This research was supported by NSF (#1649208) and Future of Life Institute (#2016-158687).
References

[1] Peter Auer, Nicolo Cesa-Bianchi, and Paul Fischer. Finite-time analysis of the multiarmed bandit problem. Machine Learning, 47(2-3):235–256, 2002.
[2] A. Balsubramani. Sharp Finite-Time Iterated-Logarithm Martingale Concentration. ArXiv e-prints, May 2014. URL https://arxiv.org/abs/1405.2639.
[3] A. Balsubramani and A. Ramdas. Sequential Nonparametric Testing with the Law of the Iterated Logarithm. ArXiv e-prints, June 2015. URL https://arxiv.org/abs/1506.03486.
[4] Leo Breiman. Probability. Society for Industrial and Applied Mathematics, Philadelphia, PA, USA, 1992. ISBN 0-89871-296-3.
[5] Nicolo Cesa-Bianchi and Gábor Lugosi. Prediction, Learning, and Games. Cambridge University Press, 2006.
[6] Lijie Chen and Jian Li. On the optimal sample complexity for best arm identification. CoRR, abs/1511.03774, 2015. URL http://arxiv.org/abs/1511.03774.
[7] Fan Chung and Linyuan Lu. Concentration inequalities and martingale inequalities: a survey. Internet Math., 3(1):79–127, 2006. URL http://projecteuclid.org/euclid.im/1175266369.
[8] Eyal Even-Dar, Shie Mannor, and Yishay Mansour. PAC bounds for multi-armed bandit and Markov decision processes. In Jyrki Kivinen and Robert H. Sloan, editors, Computational Learning Theory, volume 2375 of Lecture Notes in Computer Science, pages 255–270. Springer Berlin Heidelberg, 2002.
[9] Eyal Even-Dar, Shie Mannor, and Yishay Mansour. Action elimination and stopping conditions for the multi-armed bandit and reinforcement learning problem. Journal of Machine Learning Research (JMLR), 2006.
[10] R. H. Farrell. Asymptotic behavior of expected sample size in certain one sided tests. Ann. Math. Statist., 35(1):36–72, 03 1964.
[11] Wassily Hoeffding. Probability inequalities for sums of bounded random variables. Journal of the American Statistical Association, 1963.
[12] Wassily Hoeffding. Probability inequalities for sums of bounded random variables. Journal of the American Statistical Association, 58(301):13–30, 1963.
[13] Kevin Jamieson and Robert Nowak. Best-arm identification algorithms for multi-armed bandits in the fixed confidence setting, 2014.
[14] Kevin Jamieson, Matthew Malloy, R. Nowak, and S. Bubeck. On finding the largest mean among many. ArXiv e-prints, June 2013.
[15] Kevin Jamieson, Matthew Malloy, Robert Nowak, and Sébastien Bubeck. lil'UCB: An optimal exploration algorithm for multi-armed bandits. Journal of Machine Learning Research (JMLR), 2014.
[16] Shivaram Kalyanakrishnan, Ambuj Tewari, Peter Auer, and Peter Stone. PAC subset selection in stochastic multi-armed bandits. In ICML-2012, pages 655–662, New York, NY, USA, June-July 2012.
[17] Zohar Karnin, Tomer Koren, and Oren Somekh. Almost optimal exploration in multi-armed bandits. In ICML-2013, volume 28, pages 1238–1246. JMLR Workshop and Conference Proceedings, May 2013.
[18] Richard M. Karp and Robert Kleinberg. Noisy binary search and its applications. In Proceedings of the Eighteenth Annual ACM-SIAM Symposium on Discrete Algorithms, SODA '07, pages 881–890, Philadelphia, PA, USA, 2007.
[19] Volodymyr Mnih, Csaba Szepesvári, and Jean-Yves Audibert. Empirical Bernstein stopping. In ICML-2008, pages 672–679, New York, NY, USA, 2008.
[20] Omar Rivasplata. Subgaussian random variables: An expository note, 2012.
[21] Pranab K. Sen and Julio M. Singer. Large Sample Methods in Statistics: An Introduction with Applications. Chapman and Hall, 1993.
Nathan F. Lepora
Department of Engineering Mathematics, University of Bristol, UK
n.lepora@bristol.ac.uk
Abstract
Decision making under uncertainty is commonly modelled as a process of competitive stochastic evidence accumulation to threshold (the drift-diffusion model).
However, it is unknown how animals learn these decision thresholds. We examine
threshold learning by constructing a reward function that averages over many trials
to Wald?s cost function that defines decision optimality. These rewards are highly
stochastic and hence challenging to optimize, which we address in two ways: first,
a simple two-factor reward-modulated learning rule derived from Williams? REINFORCE method for neural networks; and second, Bayesian optimization of the
reward function with a Gaussian process. Bayesian optimization converges in fewer
trials than REINFORCE but is slower computationally with greater variance. The
REINFORCE method is also a better model of acquisition behaviour in animals
and a similar learning rule has been proposed for modelling basal ganglia function.
1
Introduction
The standard view of perceptual decision making across psychology and neuroscience is of a
competitive process that accumulates sensory evidence for the choices up to a threshold (bound)
that triggers the decision [1, 2, 3]. While there is debate about whether humans and animals are
?optimal?, nonetheless the standard psychological model of this process for two-alternative forced
choices (the drift-diffusion model [1]) is a special case of an optimal statistical test for selecting
between two hypotheses (the sequential probability ratio test, or SPRT [4]). Formally, this sequential
test optimizes a cost function linear in the decision time and type I/II errors averaged over many
trials [4]. Thus, under broad assumptions about the decision process, the optimal behaviour is simply
to stop gathering data after reaching a threshold independent of the data history and collection time.
However, there remains the problem of how to set these decision thresholds. While there is consensus
that an animal tunes its decision making by maximizing mean reward ([3, Chapter 5],[5, 6, 7, 8, 9, 10]),
the learning rule is not known. More generally, it is unknown how an animal tunes its propensity
towards making choices while also tuning its overall speed-accuracy balance.
Here we show that optimization of the decision thresholds can be considered as reinforcement learning
over single trial rewards derived from Wald?s trial averaged cost function considered previously.
However, these single trial rewards are highly stochastic and their average has a broad flat peak
(Fig. 1B), constituting a challenging optimization problem that will defeat standard methods. We
address this challenge by proposing two distinct ways to learn the decision thresholds, with one
approach closer to learning rules from neuroscience and the other to machine learning. The first
approach is a learning rule derived from Williams' REINFORCE algorithm for training neural
networks [11], which we here combine with an appropriate policy for controlling the thresholds for
optimal decision making. The second is a Bayesian optimization method that fits a Gaussian process
to the reward function and samples according to the mean reward and reward variance [12, 13, 14].
We find that both methods can successfully learn the thresholds, as validated by comparison against
an exhaustive optimization of the reward function. Bayesian optimization converges in fewer trials
Figure 1: (A) Drift-diffusion model, representing a noisy stochastic accumulation until reaching a
threshold when the decision is made. The optimal threshold maximizes the mean reward (equation 5).
(B) Sampled rewards over 1000 trials with equal thresholds θ₀ = θ₁ (dotted markers); the average
reward function is estimated from Gaussian process regression (red curve). Optimizing the thresholds
is a challenging problem, particularly when the two thresholds are not equal.
(∼10²) than REINFORCE (∼10³) but is 100-times more computationally expensive with about triple
the variance in the threshold estimates. Initial validation is with one decision threshold, corresponding
to equal costs of type I/II errors. The methods scale well to two thresholds (unequal costs), and we
use REINFORCE to map the full decision performance over both costs. Finally, we compare both
methods with experimental two-alternative forced choice data, and find that REINFORCE gives a
better account of the acquisition (learning) phase, such as converging over a similar number of trials.
2
Background to the drift-diffusion model and SPRT
The drift-diffusion model (DDM) of Ratcliff and colleagues is a standard approach for modeling the
results of two-alternative forced choice (2AFC) experiments in psychophysics [1, 15]. A decision
variable z(t) represents the sensory evidence accumulated to time t from a starting bias z(0) = z0 .
Discretizing time in uniform steps (assumed integer without losing generality), the update equation is
z(t + 1) = z(t) + ?z,
?z ? N (?, ? 2 ),
(1)
where ?z is the increment of sensory evidence at time t, which is conventionally assumed drawn
from a normal distribution N (?, ? 2 ) of mean ? and variance ? 2 . The decision criterion is that the
accumulated evidence crosses one of two decision thresholds, assumed at ??0 < 0 < ?1 .
Wald?s sequential probability ratio test (SPRT) optimally determines whether one of two hypotheses
H0 , H1 is supported by gathering samples x(t) until a confident decision can be made [4]. It is
optimal in that it minimizes the average sample size among all sequential tests to the same error
probabilities. The SPRT can be derived from applying Bayes? rule recursively to sampled data, from
when the log posterior ratio log PR(t) passes one of two decision thresholds ??0 < 0 < ?1 :
p(H1 |x(t))
p(x(t)|H1 )
log PR(t + 1) = log PR(t) + log LR(t), PR(t) =
, LR(t) =
, (2)
p(H0 |x(t))
p(x(t)|H0 )
beginning from priors at time zero: PR(0) = p(H1 )/p(H0 ). The right-hand side of equation (2) can
also be written as a log likelihood ratio log LR(t) summed over time t (by iterative substitution).
The DDM is recognized as a special case of SPRT by setting the likelihoods as two equi-variant
Gaussians N(µ₁, σ), N(µ₀, σ), so that

    log p(x|H1)/p(x|H0) = log [ e^(−(x−µ₁)²/2σ²) / e^(−(x−µ₀)²/2σ²) ] = (Δµ/σ²) x + d,
    Δµ = µ₁ − µ₀,   d = (µ₀² − µ₁²) / 2σ².        (3)

The integrated evidence z(t) in (1) then coincides with the log posterior ratio in (2) and the increments
δz with the log likelihood ratio in (2).
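To make equations (1)–(3) concrete, here is a minimal simulation of a single drift-diffusion/SPRT trial (our own sketch; the drift, noise and threshold values below are illustrative choices matching the ±1/3 evidence drift and σ = 1 used in the simulations of Section 4):

```python
import numpy as np

def ddm_trial(theta0, theta1, mu, sigma=1.0, z0=0.0, rng=None):
    """One drift-diffusion trial (equation 1): accumulate Gaussian evidence
    increments until z crosses theta1 (declare H1) or -theta0 (declare H0).
    Returns (choice in {0, 1}, decision time T)."""
    rng = rng or np.random.default_rng()
    z, t = z0, 0
    while -theta0 < z < theta1:
        z += rng.normal(mu, sigma)
        t += 1
    return (1 if z >= theta1 else 0), t

rng = np.random.default_rng(0)
trials = [ddm_trial(5.0, 5.0, mu=1/3, rng=rng) for _ in range(1000)]  # H1 trials
print("error rate:", np.mean([c == 0 for c, _ in trials]))
print("mean decision time:", np.mean([t for _, t in trials]))
```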
3 Methods to optimize the decision threshold

3.1 Reinforcement learning for optimal decision making
A general statement of decision optimality can be made in terms of minimizing the Bayes risk [4].
This cost function is linear in the type I and II error probabilities α₁ = P(H1|H0) = E₁(e) and
α₀ = P(H0|H1) = E₀(e), where the decision error e = {0, 1} for correct/incorrect trials, and is
also linear in the expected stopping times for each decision outcome¹

    C_risk := ½ (W₀α₀ + c E₀[T]) + ½ (W₁α₁ + c E₁[T]),        (4)
with type I/II error costs W₀, W₁ > 0 and cost of time c. That the Bayes risk C_risk has a unique
minimum follows from the error probabilities α₀, α₁ monotonically decreasing and the expected
stopping times E₀[T], E₁[T] monotonically increasing with increasing threshold θ₀ or θ₁. For each
pair (W₀/c, W₁/c), there is thus a unique threshold pair (θ₀*, θ₁*) that minimizes C_risk.
We introduce reward into the formalism by supposing that an application of the SPRT with thresholds
(θ₀, θ₁) has a penalty proportional to the stopping time T and decision outcome

        { −W₀ − cT,   incorrect decision of hypothesis H0
    R = { −W₁ − cT,   incorrect decision of hypothesis H1        (5)
        { −cT,        correct decision of hypothesis H0 or H1.

Over many decision trials, the average reward is thus ⟨R⟩ = −C_risk, the negative of the Bayes risk.
Reinforcement learning can then be used to find the optimal thresholds to maximize reward and thus
optimize the Bayes risk. Over many trials n = 1, 2, . . . , N with reward R(n), the problem is to
estimate these optimal thresholds (θ₀*, θ₁*) while maintaining minimal regret: the difference between
the reward sum of the optimal decision policy and the sum of the collected rewards

    ρ(N) = −N C_risk(θ₀*, θ₁*) − Σ_{n=1}^N R(n).        (6)

This is recognized as a multi-armed bandit problem with a continuous two-dimensional action space
parametrized by the threshold pairs (θ₀, θ₁).
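The single-trial reward of equation (5), and a Monte-Carlo estimate of ⟨R⟩ = −C_risk, can be sketched as follows (our own illustration, reusing ddm_trial from the sketch above; the equal prior over H0/H1 follows the footnote assumption):

```python
import numpy as np

def trial_reward(choice, true_hyp, T, W0, W1, c):
    """Single-trial reward of equation (5): always pay the time cost cT, plus
    W0 or W1 when the declared hypothesis (choice) is incorrect."""
    if choice == true_hyp:
        return -c * T
    return (-W0 if choice == 0 else -W1) - c * T

def mean_reward(theta0, theta1, W0, W1, c, n_trials=2000, rng=None):
    """Monte-Carlo estimate of <R> = -C_risk with H0, H1 equally likely and
    evidence drifting at -/+ 1/3 toward the correct threshold (Section 4)."""
    rng = rng or np.random.default_rng()
    total = 0.0
    for _ in range(n_trials):
        true_hyp = int(rng.integers(2))
        choice, T = ddm_trial(theta0, theta1,
                              mu=(1/3 if true_hyp else -1/3), rng=rng)
        total += trial_reward(choice, true_hyp, T, W0, W1, c)
    return total / n_trials

print(mean_reward(2.0, 2.0, W0=1.0, W1=1.0, c=0.05))
```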
The optimization problem of finding the thresholds that maximize mean reward is highly challenging
because of the stochastic decision times and errors. Standard approaches such as gradient ascent fail
and even state-of-the-art approaches such as cross-entropy or natural evolution strategies are ineffective. A successful approach must combine reward averaging with learning (in a more sophisticated
way than batch-averaging or filtering). We now consider two distinct approaches for this.
3.2 REINFORCE method
The first approach to optimize the decision threshold is a standard 2-factor learning rule derived
from Williams? REINFORCE algorithm for training neural networks [11], but modified to the novel
application of continuous bandits. From a modern perspective, the REINFORCE algorithm is seen as
an example of a policy gradient method [16, 17]. These are well-suited to reinforcement learning with
continuous action spaces, because they use gradient descent to optimize continuously parameterized
policies with respect to cumulative reward.
We consider the decision thresholds (θ₀, θ₁) to parametrize actions that correspond to making a
single decision with those thresholds. Here we use a policy that expresses the threshold as a linear
combination of binary unit outputs, with fixed coefficients specifying the contribution of each unit

    θ₀ = Σ_{j=1}^{n_s} s_j y_j,    θ₁ = Σ_{j=n_s+1}^{2n_s} s_j y_j.        (7)
Exponential coefficients were found to work well (equivalent to binary encoding), scaled to give a
range of thresholds from zero to θ_max:

    s_j = s_{n_s+j} = (1/2)^j / (1 − (1/2)^{n_s}) · θ_max,        (8)

where here we use n_s = 10 units per threshold with maximum threshold θ_max = 10. The benefit
of this policy (7, 8) is that the learning rule can be expressed in terms of the binary unit outputs
y_j = {0, 1}, which are the variables considered in the REINFORCE learning rule [11].
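For example, the coefficients of equation (8) and the resulting threshold range can be checked directly (a small sketch with the paper's values n_s = 10, θ_max = 10):

```python
import numpy as np

ns, theta_max = 10, 10.0
j = np.arange(1, ns + 1)
s = (0.5 ** j) / (1.0 - 0.5 ** ns) * theta_max   # coefficients of equation (8)

def threshold(y):
    """Threshold of equation (7) from binary unit outputs y (length ns)."""
    return float(s @ np.asarray(y, dtype=float))

print(threshold(np.ones(ns)))    # all units on: sums to theta_max = 10 exactly
print(threshold(np.zeros(ns)))   # all units off: threshold 0
```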
Following Williams, the policy choosing the threshold on a trial is stochastic by virtue of the binary
unit outputs y_j = {0, 1} being distributed according to a logistic function of weights w_j, such that

    y_j ∼ p(y_j|w_j) = f(w_j) y_j + (1 − f(w_j))(1 − y_j),    f(w_j) = 1 / (1 + e^(−w_j)).        (9)

¹ The full expression has prior probabilities for the frequency of each outcome, which are here assumed equal.
The REINFORCE learning rule for these weights is determined by the reward R(n) on trial n

    Δw_j = η [y_j(t) − f(w_j)] R(n),        (10)

with learning rate η (here generally taken as 0.1). An improvement to the learning rule can be
made with reinforcement comparison, with a reference reward R̄(n) = γR(n) + (1 − γ)R̄(n − 1)
subtracted from R(n); a value γ = 0.5 was found to be effective, and is used in all simulations using
the REINFORCE rule in this paper.
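Putting the policy (7)–(9) and the learning rule (10) together gives the following sketch of a full learning episode (our own implementation, reusing ddm_trial and trial_reward from the earlier sketches; the symbols η and γ for the learning rate and reference-reward mixing follow our reconstruction above, and whether the reference reward includes the current trial before subtraction is an assumption):

```python
import numpy as np

def reinforce_thresholds(W0, W1, c, n_trials=5000, eta=0.1, gamma=0.5,
                         ns=10, theta_max=10.0, rng=None):
    """REINFORCE learning of the two decision thresholds (equations 7-10)."""
    rng = rng or np.random.default_rng()
    j = np.arange(1, ns + 1)
    s = (0.5 ** j) / (1.0 - 0.5 ** ns) * theta_max
    w = np.zeros(2 * ns)                 # logistic weights for both thresholds
    r_bar = 0.0                          # reference reward
    for n in range(n_trials):
        p = 1.0 / (1.0 + np.exp(-w))     # equation (9)
        y = (rng.random(2 * ns) < p).astype(float)
        theta0, theta1 = float(s @ y[:ns]), float(s @ y[ns:])   # equation (7)
        true_hyp = int(rng.integers(2))
        choice, T = ddm_trial(theta0, theta1,
                              mu=(1/3 if true_hyp else -1/3), rng=rng)
        R = trial_reward(choice, true_hyp, T, W0, W1, c)
        r_bar = gamma * R + (1.0 - gamma) * r_bar
        w += eta * (y - p) * (R - r_bar)   # equation (10) with comparison
    p = 1.0 / (1.0 + np.exp(-w))
    return float(s @ p[:ns]), float(s @ p[ns:])   # mean learned thresholds

print(reinforce_thresholds(W0=0.1, W1=1.0, c=0.05))
```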
The power of the REINFORCE learning rule is that the weight change is equal to the gradient of
the expected return J(w) = E[R_{θ}] over all possible threshold sequences {θ}. Thus, a single-trial
learning rule performs like stochastic gradient ascent averaged over many trials. Note also that the
neural network input x_i of the original formalism [11] is here set to x_1 = 1, but a non-trivial input
could be used to aid learning recall and generalization (see discussion). Overall, the learning follows
a reward-modulated two-factor rule that recruits units distributed according to an exponential size
principle, and thus resembles models of biological motor learning.
3.3 Bayesian optimization method
The second approach is to use Bayesian optimization to find the optimal thresholds from iteratively
building a probabilistic model of the reward function that is used to guide future sampling [12, 13, 14].
Bayesian optimization typically uses a Gaussian process model, which provides a nonlinear regression
model both of the mean reward and the reward variance with decision threshold. This model can then
be used to guide future threshold choice via maximising an acquisition function of these quantities.
The basic algorithm for Bayesian optimization is as follows:
Algorithm Bayesian optimization applied to optimal decision making
    for n = 1 to N do
        New thresholds from optimizing the acquisition function: (θ₀, θ₁)_n = argmax_{(θ₀,θ₁)} α(θ₀, θ₁; D_{n−1})
        Make the decision with thresholds (θ₀, θ₁)_n to find reward R(n)
        Augment data by including the new samples: D_n = (D_{n−1}; (θ₀, θ₁)_n, R(n))
        Update the statistical (Gaussian process) model of the rewards
    end for
Following other work on Bayesian optimization, we model the reward dependence on the decision
thresholds with a Gaussian process
    R(θ₀, θ₁) ∼ GP[m(θ₀, θ₁), k(θ₀, θ₁; θ₀′, θ₁′)],        (11)

with mean m(θ₀, θ₁) = E[R(θ₀, θ₁)] and covariance modelled by a squared-exponential function

    k(θ₀, θ₁; θ₀′, θ₁′) = σ_f² exp( −λ² ||(θ₀, θ₁) − (θ₀′, θ₁′)||² ).        (12)

The fitting of the hyperparameters σ_f², λ used standard methods [18] (GPML toolbox and a
quasi-Newton optimizer in MATLAB). In principle, the two thresholds could each have distinct
hyperparameters, but we use one to maintain the symmetry θ₀ ↔ θ₁ of the decision problem.
The choice of decision thresholds is viewed as a sampling problem, and represented by maximizing
an acquisition function of the decision thresholds that trades off exploration and exploitation. Here we
use the probability of improvement, which guides the sampling towards regions of high uncertainty
and reward by maximizing the chance of improving the present best estimate:
    (θ₀, θ₁)_n = argmax_{(θ₀,θ₁)} α(θ₀, θ₁),    α(θ₀, θ₁) = Φ( [m(θ₀, θ₁) − R(θ₀*, θ₁*)] / √k(θ₀, θ₁; θ₀, θ₁) ),        (13)

where (θ₀*, θ₁*) are the threshold estimates that have given the greatest reward and Φ is the normal
cumulative distribution function. Usually one would include a noise parameter for exploration, but
because the decision making is stochastic we use the noise from that process instead.
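A self-contained sketch of this loop follows (our own illustration: the GP hyperparameters are fixed rather than fitted with GPML, the threshold space is discretized to a grid, and run_trial is any function returning one stochastic reward, e.g. a wrapper around ddm_trial and trial_reward from the earlier sketches):

```python
import numpy as np
from math import erf

def phi(x):
    """Standard normal CDF, applied elementwise."""
    return 0.5 * (1.0 + np.vectorize(erf)(np.asarray(x, dtype=float) / np.sqrt(2.0)))

def gp_posterior(X, y, Xs, sf2=1.0, lam=1.0, noise=0.5):
    """GP regression with the squared-exponential kernel of equation (12);
    hyperparameters are fixed here for brevity instead of being fitted."""
    def k(A, B):
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return sf2 * np.exp(-(lam ** 2) * d2)
    K = k(X, X) + noise * np.eye(len(X))
    Ks = k(Xs, X)
    mean = Ks @ np.linalg.solve(K, y)
    var = sf2 - np.einsum('ij,ji->i', Ks, np.linalg.solve(K, Ks.T))
    return mean, np.maximum(var, 1e-9)

def bayes_opt_thresholds(run_trial, n_iters=100, theta_max=10.0, rng=None):
    """Pick threshold pairs by probability of improvement (equation 13)."""
    rng = rng or np.random.default_rng()
    grid = np.linspace(0.1, theta_max, 20)
    Xs = np.array([(t0, t1) for t0 in grid for t1 in grid])
    X = [Xs[rng.integers(len(Xs))]]        # random initial threshold pair
    y = [run_trial(*X[0])]
    for _ in range(n_iters - 1):
        mean, var = gp_posterior(np.array(X), np.array(y), Xs)
        pi = phi((mean - max(y)) / np.sqrt(var))   # probability of improvement
        theta = Xs[int(np.argmax(pi))]
        X.append(theta)
        y.append(run_trial(*theta))
    mean, _ = gp_posterior(np.array(X), np.array(y), Xs)
    return tuple(Xs[int(np.argmax(mean))])   # thresholds with best mean reward

rng = np.random.default_rng(0)
def run_trial(theta0, theta1):
    true_hyp = int(rng.integers(2))
    choice, T = ddm_trial(theta0, theta1, mu=(1/3 if true_hyp else -1/3), rng=rng)
    return trial_reward(choice, true_hyp, T, W0=0.1, W1=1.0, c=0.05)

print(bayes_opt_thresholds(run_trial))
```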
Figure 2: REINFORCE learning (exponential coefficients) of the two decision thresholds over a
single learning episode. Decision costs c = 0.05, W0 = 0.1 and W1 = 1. Plots are smoothed over 50
trials. The red curve is the average accuracy by trial number (fitted to a cumulative Weibull function).
Optimal values (from exhaustive optimization) are shown as dashed lines.
Figure 3: Bayesian optimization of the two decision thresholds over a single learning episode. Other
details are the same as in Fig. 2, other than only 500 trials were used with smoothing over 20 trials.
4 Results

4.1 Single learning episode
The learning problem is to find the pair of optimal decision thresholds (θ₀*, θ₁*) that maximize the
reward function (5), which is a linear combination of penalties for delays and type I and II errors.
The reward function has two free parameters that affect the optimal thresholds: the costs W₀/c and
W₁/c of making type I and II errors relative to time. The methods apply generally, although for
concreteness we consider a drift-diffusion model equivalent to the SPRT with distribution means
µ₀ = −µ₁ = 1/3 and standard deviation σ = 1.
Both the REINFORCE method and Bayesian optimization can converge to approximations of the
optimal decision thresholds, as shown in Figures 2D,3D above for a typical learning episode. The
decision error e, decision time T and reward R are all highly variable from the stochastic nature of
the evidence, although displayed plots have their variance reduced by smoothing over 50 trials (to
help interpret the results). There is a gradual convergence towards near optimal decision performance.
Clearly the main difference between the REINFORCE method and the Bayesian optimization method
is the speed of convergence to the decision thresholds (cf. Figures 2D vs 3D). REINFORCE gradually
converges over ∼5000 trials whereas Bayesian optimization converges in ≲500 trials. However,
there are other differences between the two methods that are only revealed for multiple learning
episodes, which act to balance the pros and cons across the two methods.
4.2 Multiple learning episodes: one decision threshold
For validation purposes, we reduce the learning problem to the simpler case where there is only
one decision threshold θ₀ = θ₁, by setting costs equal for type I and II errors W₀/c = W₁/c so
that the error probabilities are equal α₀ = α₁. This will allow us to compare the two methods
in a representative scenario that is simpler to visualize and can be validated against an exhaustive
optimization of the reward function (which takes too long to calculate for two thresholds).
Figure 4: REINFORCE learning of one decision threshold (for equal thresholds θ₁ = θ₀) over 200
learning episodes with costs c/W₁ = c/W₀ sampled uniformly from [0, 0.1]. Results are after 5000
learning trials (averaged over 100 trials). The mean and standard deviation of these results (red line
and shaded region) are compared with an exhaustive optimization over 10⁶ episodes (blue curves).
Figure 5: Bayesian optimization of one decision threshold (for equal thresholds θ₁ = θ₀) over 200
learning episodes with costs c/W₁ = c/W₀ sampled uniformly from [0, 0.1]. Results are after 500
learning trials (averaged over 100 trials). The mean and standard deviation of these results (red line
and shaded region) are compared with an exhaustive optimization over 10⁶ episodes (blue curves).
We consider REINFORCE over 5000 trials and Bayesian optimization over 500 trials, which are
sufficient for convergence (Figures 2,3). Costs were considered over a range W/c > 10 via random
uniform sampling of c/W over the range [0, 0.1]. Mean decision errors e, decision times T , rewards
and thresholds are averaged over the final 50 trials, combining the results for both choices.
Both the REINFORCE and Bayesian optimization methods estimate near-optimal decision thresholds
for all considered cost parameters (Figures 4,5; red curves) as verified from comparison with an
exhaustive search of the reward function (blue curves) over 106 decision trials (randomly sampling
the threshold range to estimate an average reward function, as in Fig 1B). In both cases, the exhaustive
search lies within one standard deviation of the decision threshold from the two learning methods.
There are, however, differences in performance between the two methods. Firstly, the variance of the
threshold estimates is greater for Bayesian optimization than for REINFORCE (cf. Figures 4D vs
5D). The variance of the decision thresholds feeds through into larger variances for the decision error,
time and reward. Secondly, although Bayesian optimization converges in fewer trials (500 vs 5000),
it comes at the expense of greater computational cost of the algorithm (Table 1).
The above results were checked for robustness across reasonable ranges of the various
metaparameters for each learning method. For REINFORCE, the results were not appreciably affected by
having any learning rate η within the range 0.1–1; similarly, increasing the unit number n_s did not
affect the threshold variances, but scales the computation time.
4.3 Multiple learning episodes: two decision thresholds
We now consider the learning problem with two decision thresholds (θ₀, θ₁) that optimize the reward
function (5) with differing W₀/c and W₁/c values. We saw above that REINFORCE produces the
more accurate estimates relative to the computational cost, so we concentrate on that method only.
Figure 6: Reinforcement learning of two decision thresholds. Method same as Figure 4 except that
200² learning episodes are considered with costs (c/W₀, c/W₁) sampled from [0, 0.1] × [0, 0.1]. The
threshold θ₀ results are just reflections of those for θ₁ in the axis c/W₀ ↔ c/W₁ and thus not shown.
Table 1: Comparison of threshold learning methods. Results for one decision threshold, averaging
over the data in Figures 4, 5. (Benchmarked on an i7 2.7 GHz CPU.)

                                REINFORCE method        Bayesian optimization    Exhaustive optimization
    computation time            0.5 sec (5000 trials)   50 sec (500 trials)      44 sec (10⁶ trials)
    computation time/trial      0.1 msec/trial          100 msec/trial           0.04 msec/trial
    uncertainty, Δθ (1 s.d.)    0.23                    0.75                     0.01
The REINFORCE method can find the two decision thresholds (Figure 6), as demonstrated by
estimating the thresholds over 200² instances of the reward function with (c/W₀, c/W₁) sampled
uniformly from [0, 0.1] × [0, 0.1]. Because of the high compute time, we cannot compare the results
to those from an exhaustive search, apart from that the plot diagonals (W₀/c = W₁/c) reduce to the
single threshold results which matched an exhaustive optimization (Figure 4).
Figure 6 is of general interest because it maps the drift-diffusion model (SPRT) decision performance
over a main portion of its parameter space. Results for the two decision thresholds (θ₀, θ₁) are
reflections of each other about W₀ ↔ W₁, while the decision error, time and reward are reflection
symmetric (consistent with these symmetries of the decision problem). All quantities depend on both
weight parameters (W0 /c, W1 /c) in a smooth but non-trivial manner. To our knowledge, this is the
first time the full decision performance has been mapped.
4.4 Comparison with animal learning
The relation between reward and decision optimality is directly relevant to the psychophysics of two
alternative forced choice tasks in the tradeoff between decision accuracy and speed [3]. Multiple
studies support that the decision threshold is set to maximize reward [7, 8, 9]. However, the mechanism
by which subjects learn the optimal thresholds has not been addressed. Our two learning methods are
candidate mechanisms, and thus should be compared with experiment.
We have found a couple of studies showing data over the acquisition phase of two-alternative forced
choice behavioural experiments: one for rodent whisker vibrotactile discrimination [19, Figure 4] and
the other for bat echoacoustic discrimination [20]. Studies detailing the acquisition phase are rare
compared to those of the proficient phase, even though they are a necessary component of all such
behavioural experiments (and successful studies rest on having a well-designed acquisition phase).
In both behavioural studies, the animals acquired proficient decision performance after 5000–10000
trials: in rodent, this was after 25–50 sessions of ∼200 trials [19, Figure 4]; and in bat, after about
6000 trials for naive animals [20, Figure 4]. The typical progress of learning was to begin with
random choices (mean decision error e = 0.5) and then gradually converge on the appropriate balance
of decision time vs accuracy. There was considerable variance in final performance across different
animals (in rodent, mean decision errors were e ≈ 0.05–0.15).
That acquisition takes 5000 or more trials is consistent with the REINFORCE learning rule (Figure 2),
and not with Bayesian optimization (Figure 3). Moreover, the shape of the acquisition curve for the
REINFORCE method resembles that of the animal learning, in also having a good fit to a cumulative
Weibull function over a similar number of trials (red line, Figure 2). That being said, the animals begin
making random choices and gradually improve in accuracy with longer decision times, whereas both
artificial learning methods (Figures 2,3) begin with accurate choices and then decrease in accuracy
and decision time. Taken together, this evidence supports that the REINFORCE learning rule is a
plausible model of animal learning, although further theoretical and experimental study is required.
5 Discussion
We examined how to learn decision thresholds in the drift-diffusion model of perceptual decision
making. A key step was to use single trial rewards derived from Wald?s trial-averaged cost function
for the equivalent sequential probability ratio test, which took the simple form of a linear weighting of
penalties due to time and type I/II errors. These highly stochastic rewards are challenging to optimize,
which we addressed with two distinct methods to learn the decision thresholds.
The first approach for learning the thresholds was based on a method for training neural networks
known as Williams' REINFORCE rule [11]. In modern terminology, this can be viewed as a
policy gradient method [16, 17] and here we proposed an appropriate policy for optimal decision
making. The second method was a modern Bayesian optimization method that samples and builds
a probabilistic model of the reward function to guide further sampling [12, 13, 14]. Both learning
methods converged to nearby the optimum decision thresholds, as validated against an exhaustive
optimization (over 10⁶ trials). The Bayesian optimization method converged much faster (∼500
trials) than the REINFORCE method (∼5000 trials). However, Bayesian optimization is three-times
as variable in the threshold estimates and 40-times slower in computation time. It appears that the
faster convergence for Bayesian optimization leads to less averaging over the stochastic rewards, and
hence greater variance than with the REINFORCE method.
We expect that both the REINFORCE and Bayesian optimization methods used here can be improved
to compensate for some of their individual drawbacks. For example, the full REINFORCE learning
rule has a third factor corresponding to the neural network input, which could represent a context
signal to allow recall and generalization over past learnt thresholds; also, information on past trial
performance is discarded by REINFORCE, which could be partially retained to improve learning.
Bayesian optimization could be improved in computational speed by updating the Gaussian process
with just the new samples after each decision, rather than refitting the entire Gaussian process; also,
the variance of the threshold estimates may improve with other choices of acquisition function for
sampling the rewards or other assumptions for the Gaussian process covariance function. In addition,
the optimization methods may have broader applicability when the optimal decision thresholds vary
with time [10], such as tasks with deadlines or when there are multiple (three or more) choices.
Several more factors support the REINFORCE method as a model of reward-driven learning during
perceptual decision making. First, REINFORCE is based on a neural network and is thus better
suited as a connectionist model of brain function. Second, the REINFORCE model results (Fig. 2)
resemble acquisition data from behavioural experiments in rodent [19] and bat [20] (Sec. 4.4). Third,
the site of reward learning would plausibly be the basal ganglia, and a similar 3-factor learning rule
has already been used to model cortico-striatal plasticity [21]. In addition, multi-alternative (MSPRT)
versions of the drift-diffusion model offer a model of action selection in the basal ganglia [22, 23],
and so the present REINFORCE model of decision acquisition would extend naturally to encompass
a combined model of reinforcement learning and optimal decision making in the brain.
Acknowledgements
I thank Jack Crago, John Lloyd, Kirsty Aquilina, Kevin Gurney and Giovanni Pezzulo for discussions
related to this research. The code used to generate the results and figures for this paper is at
http://lepora.com/publications.htm
References

[1] R. Ratcliff. A theory of memory retrieval. Psychological Review, 85:59–108, 1978.
[2] J. Gold and M. Shadlen. The neural basis of decision making. Annu. Rev. Neurosci., 30:535–574, 2007.
[3] R. Bogacz, E. Brown, J. Moehlis, P. Holmes, and J.D. Cohen. The physics of optimal decision making: A formal analysis of models of performance in two-alternative forced-choice tasks. Psychological Review, 113(4):700, 2006.
[4] A. Wald and J. Wolfowitz. Optimum character of the sequential probability ratio test. The Annals of Mathematical Statistics, 19(3):326–339, 1948.
[5] J. Gold and M. Shadlen. Banburismus and the brain: decoding the relationship between sensory stimuli, decisions, and reward. Neuron, 36(2):299–308, 2002.
[6] P. Simen, J. Cohen, and P. Holmes. Rapid decision threshold modulation by reward rate in a neural network. Neural Networks, 19(8):1013–1026, 2006.
[7] P. Simen, D. Contreras, C. Buck, P. Hu, P. Holmes, and J. Cohen. Reward rate optimization in two-alternative decision making: empirical tests of theoretical predictions. Journal of Experimental Psychology: Human Perception and Performance, 35(6):1865, 2009.
[8] R. Bogacz, P. Hu, P. Holmes, and J. Cohen. Do humans produce the speed–accuracy trade-off that maximizes reward rate? The Quarterly Journal of Experimental Psychology, 63(5):863–891, 2010.
[9] F. Balci, P. Simen, R. Niyogi, A. Saxe, J. Hughes, P. Holmes, and J. Cohen. Acquisition of decision making criteria: reward rate ultimately beats accuracy. Attention, Perception, & Psychophysics, 73(2):640–657, 2011.
[10] J. Drugowitsch, R. Moreno-Bote, A. Churchland, M. Shadlen, and A. Pouget. The cost of accumulating evidence in perceptual decision making. The Journal of Neuroscience, 32(11):3612–3628, 2012.
[11] R. Williams. Simple statistical gradient-following algorithms for connectionist reinforcement learning. Machine Learning, 8(3-4):229–256, 1992.
[12] M. Pelikan. Bayesian optimization algorithm. In Hierarchical Bayesian Optimization Algorithm, pages 31–48. Springer, 2005.
[13] E. Brochu, V. Cora, and N. De Freitas. A tutorial on Bayesian optimization of expensive cost functions, with application to active user modeling and hierarchical reinforcement learning. arXiv preprint arXiv:1012.2599, 2010.
[14] J. Snoek, H. Larochelle, and R. Adams. Practical Bayesian optimization of machine learning algorithms. In Advances in Neural Information Processing Systems, pages 2951–2959, 2012.
[15] R. Ratcliff and G. McKoon. The diffusion decision model: theory and data for two-choice decision tasks. Neural Computation, 20(4):873–922, 2008.
[16] J. Peters and S. Schaal. Reinforcement learning of motor skills with policy gradients. Neural Networks, 21(4):682–697, 2008.
[17] R. Sutton, D. McAllester, S. Singh, and Y. Mansour. Policy gradient methods for reinforcement learning with function approximation. In Neural Information Processing Systems 12, pages 1057–1063, 2000.
[18] C. Rasmussen and C. Williams. Gaussian Processes for Machine Learning. The MIT Press, 2006.
[19] J. Mayrhofer, V. Skreb, W. von der Behrens, S. Musall, B. Weber, and F. Haiss. Novel two-alternative forced choice paradigm for bilateral vibrotactile whisker frequency discrimination in head-fixed mice and rats. Journal of Neurophysiology, 109(1):273–284, 2013.
[20] K. Stich and Y. Winter. Lack of generalization of object discrimination between spatial contexts by a bat. Journal of Experimental Biology, 209(23):4802–4808, 2006.
[21] M. Frank and E. Claus. Anatomy of a decision: striato-orbitofrontal interactions in reinforcement learning, decision making, and reversal. Psychological Review, 113(2):300, 2006.
[22] R. Bogacz and K. Gurney. The basal ganglia and cortex implement optimal decision making between alternative actions. Neural Computation, 19(2):442–477, 2007.
[23] N. Lepora and K. Gurney. The basal ganglia optimize decision making over general perceptual hypotheses. Neural Computation, 24(11):2924–2945, 2012.
| 6494 |@word neurophysiology:1 trial:58 exploitation:1 version:1 hu:2 simulation:1 gradual:1 covariance:2 recursively:1 initial:1 substitution:1 selecting:1 past:2 freitas:1 com:1 written:1 must:1 john:1 plasticity:1 shape:1 motor:2 moreno:1 plot:3 designed:1 update:2 v:4 discrimination:4 fewer:3 beginning:1 proficient:2 lr:3 provides:1 equi:1 firstly:1 simpler:2 mathematical:1 dn:3 incorrect:3 combine:2 fitting:1 manner:1 introduce:1 acquired:1 snoek:1 expected:3 rapid:1 examine:1 multi:2 brain:3 decreasing:1 cpu:1 armed:1 increasing:3 spain:1 estimating:1 matched:1 begin:3 maximizes:2 moreover:1 bogacz:3 benchmarked:1 minimizes:2 weibull:2 recruit:1 proposing:1 differing:1 finding:1 act:1 scaled:1 uk:2 unit:7 engineering:1 sutton:1 accumulates:1 encoding:1 modulation:1 resembles:2 examined:1 specifying:1 challenging:5 shaded:2 range:6 averaged:7 bat:4 unique:2 practical:1 yj:9 hughes:1 regret:1 implement:1 empirical:1 cannot:1 selection:1 risk:4 applying:1 context:2 accumulating:1 accumulation:2 optimize:8 map:2 equivalent:3 demonstrated:1 maximizing:3 williams:7 attention:1 starting:1 pouget:1 rule:20 holmes:5 increment:2 annals:1 controlling:1 trigger:1 behrens:1 user:1 losing:1 us:1 hypothesis:6 expensive:2 particularly:1 updating:1 preprint:1 calculate:1 wj:8 region:3 episode:12 trade:2 decrease:1 reward:67 hri:1 lepora:4 ultimately:1 depend:1 singh:1 churchland:1 f2:2 basis:1 htm:1 chapter:1 represented:1 various:1 forced:7 distinct:4 effective:1 artificial:1 kevin:1 outcome:3 h0:9 exhaustive:11 choosing:1 larger:1 plausible:1 statistic:1 niyogi:1 gp:1 noisy:1 final:2 sequence:1 took:1 interaction:1 relevant:1 combining:1 gold:2 convergence:4 optimum:2 produce:2 adam:1 converges:5 object:1 help:1 ac:1 progress:1 resemble:1 come:1 larochelle:1 concentrate:1 anatomy:1 drawback:1 correct:2 stochastic:11 exploration:2 human:3 saxe:1 mckoon:1 mcallester:1 behaviour:2 generalization:3 biological:1 secondly:1 considered:6 normal:2 exp:1 visualize:1 optimizer:1 vary:1 purpose:1 propensity:1 saw:1 appreciably:1 successfully:1 cora:1 mit:1 clearly:1 gaussian:10 modified:1 reaching:2 rather:1 pn:1 broader:1 publication:1 gpml:1 derived:6 validated:3 schaal:1 improvement:2 modelling:1 ratcliff:3 likelihood:3 stopping:3 accumulated:2 integrated:1 typically:1 entire:1 crago:1 bandit:2 relation:1 metaparameters:1 overall:2 among:1 augment:1 animal:12 smoothing:2 special:2 psychophysics:3 summed:1 art:1 equal:10 spatial:1 having:3 sampling:7 biology:1 represents:1 broad:2 afc:1 future:2 connectionist:2 stimulus:1 modern:3 randomly:1 winter:1 individual:1 phase:5 argmax:2 maintain:1 interest:1 highly:5 accurate:2 closer:1 necessary:1 moehlis:1 detailing:1 e0:3 theoretical:2 minimal:1 fitted:1 psychological:4 instance:1 formalism:2 modeling:2 cost:30 applicability:1 deviation:4 rare:1 uniform:2 delay:1 successful:2 too:1 optimally:1 learnt:1 combined:1 confident:1 peak:1 refitting:1 probabilistic:2 off:2 physic:1 decoding:1 together:1 continuously:1 mouse:1 w1:16 squared:1 von:1 return:1 account:1 de:1 sec:4 lloyd:1 coefficient:3 bilateral:1 view:1 h1:9 red:6 competitive:2 bayes:5 portion:1 contribution:1 accuracy:11 variance:13 correspond:1 modelled:2 bayesian:31 bristol:2 history:1 converged:2 checked:1 against:3 nonetheless:1 acquisition:14 colleague:1 frequency:2 naturally:1 con:1 couple:1 stop:1 sampled:6 recall:2 knowledge:1 sophisticated:1 brochu:1 appears:1 feed:1 simen:3 improved:2 though:1 generality:1 just:2 until:2 gurney:3 hand:1 nonlinear:1 marker:1 lack:1 defines:1 logistic:1 
building:1 brown:1 evolution:1 hence:2 symmetric:1 iteratively:1 during:1 coincides:1 rat:1 criterion:2 bote:1 performs:1 reflection:3 pro:1 weber:1 jack:1 novel:2 cohen:5 defeat:1 extend:1 interpret:1 tuning:1 mathematics:1 similarly:1 session:1 longer:1 cortex:1 posterior:2 perspective:1 optimizing:2 optimizes:1 apart:1 driven:1 scenario:1 contreras:1 discretizing:1 binary:4 der:1 seen:1 minimum:1 greater:4 recognized:2 converge:2 maximize:4 wolfowitz:1 monotonically:2 dashed:1 ii:8 signal:1 full:4 multiple:5 encompass:1 paradigm:1 smooth:1 faster:2 cross:2 long:1 compensate:1 offer:1 retrieval:1 deadline:1 e1:3 converging:1 variant:1 wald:5 regression:2 basic:1 prediction:1 arxiv:2 represent:1 background:1 whereas:2 addition:2 addressed:2 rest:1 pass:1 ascent:2 ineffective:1 supposing:1 subject:1 claus:1 integer:1 near:2 revealed:1 affect:2 fit:2 psychology:3 reduce:2 tradeoff:1 i7:1 whether:2 expression:1 penalty:3 peter:1 action:5 matlab:1 pelikan:1 generally:3 buck:1 tune:2 ddm:2 reduced:1 generate:1 http:1 tutorial:1 dotted:1 neuroscience:3 estimated:1 per:1 blue:3 affected:1 express:1 basal:5 key:1 terminology:1 threshold:100 drawn:1 verified:1 diffusion:11 concreteness:1 sum:2 parameterized:1 uncertainty:3 reasonable:1 decision:122 orbitofrontal:1 bound:1 ct:3 flat:1 nearby:1 nathan:1 speed:5 optimality:3 department:1 according:3 combination:2 across:4 character:1 rev:1 making:24 gradually:3 pr:5 gathering:2 taken:2 computationally:2 equation:3 behavioural:4 remains:1 previously:1 fail:1 mechanism:2 end:1 reversal:1 parametrize:1 gaussians:1 apply:1 quarterly:1 hierarchical:2 appropriate:3 subtracted:1 alternative:9 batch:1 robustness:1 slower:2 original:1 include:1 maintaining:1 plausibly:1 build:1 already:1 quantity:2 strategy:1 dependence:1 diagonal:1 said:1 gradient:9 thank:1 reinforce:39 mapped:1 parametrized:1 w0:16 collected:1 consensus:1 trivial:2 banburismus:1 maximising:1 code:1 retained:1 relationship:1 ratio:8 balance:3 minimizing:1 statement:1 striatal:1 debate:1 expense:1 frank:1 negative:1 sprt:8 policy:11 unknown:2 neuron:1 discarded:1 descent:1 displayed:1 beat:1 head:1 mansour:1 smoothed:1 drift:10 pair:4 required:1 toolbox:1 unequal:1 barcelona:1 nip:1 address:2 usually:1 perception:2 challenge:1 max:3 including:1 memory:1 power:1 greatest:1 natural:1 representing:1 improve:3 axis:1 conventionally:1 naive:1 sn:1 prior:2 review:3 acknowledgement:1 relative:2 whisker:2 expect:1 proportional:1 filtering:1 triple:1 validation:2 sufficient:1 consistent:2 shadlen:3 principle:2 supported:1 free:1 rasmussen:1 bias:1 side:1 guide:4 allow:2 cortico:1 formal:1 benefit:1 distributed:2 curve:7 ghz:1 giovanni:1 cumulative:4 drugowitsch:1 sensory:4 commonly:1 collection:1 reinforcement:12 made:4 constituting:1 sj:3 skill:1 active:1 assumed:4 xi:1 continuous:3 iterative:1 search:3 table:2 learn:6 nature:1 symmetry:2 improving:1 constructing:1 did:1 main:2 neurosci:1 noise:2 hyperparameters:2 x1:1 fig:4 representative:1 site:1 aid:1 n:5 msec:3 exponential:4 lie:1 candidate:1 perceptual:5 weighting:1 third:2 z0:1 annu:1 showing:1 virtue:1 evidence:10 sequential:6 rodent:4 suited:2 entropy:1 simply:1 ganglion:5 expressed:1 partially:1 springer:1 determines:1 quasinewton:1 chance:1 viewed:2 towards:3 considerable:1 change:1 stich:1 determined:1 typical:2 uniformly:3 except:1 averaging:4 experimental:5 formally:1 support:3 modulated:2 |
6,074 | 6,495 | Sorting out typicality with the inverse moment matrix
SOS polynomial
Jean-Bernard Lasserre
LAAS-CNRS & IMT
Université de Toulouse
31400 Toulouse, France
lasserre@laas.fr
Edouard Pauwels
IRIT & IMT
Université Toulouse 3 Paul Sabatier
31400 Toulouse, France
edouard.pauwels@irit.fr
Abstract
We study a surprising phenomenon related to the representation of a cloud of data
points using polynomials. We start with the previously unnoticed empirical observation that, given a collection (a cloud) of data points, the sublevel sets of a certain
distinguished polynomial capture the shape of the cloud very accurately. This
distinguished polynomial is a sum-of-squares (SOS) derived in a simple manner
from the inverse of the empirical moment matrix. In fact, this SOS polynomial is
directly related to orthogonal polynomials and the Christoffel function. This allows
to generalize and interpret extremality properties of orthogonal polynomials and to
provide a mathematical rationale for the observed phenomenon. Among diverse
potential applications, we illustrate the relevance of our results on a network intrusion detection task for which we obtain performances similar to existing dedicated
methods reported in the literature.
1
Introduction
Capturing and summarizing the global shape of a cloud of points is at the heart of many data
processing applications such as novelty detection, outlier detection as well as related unsupervised
learning tasks such as clustering and density estimation. One of the main difficulties is to account
for potentially complicated shapes in multidimensional spaces, or equivalently to account for non
standard dependence relations between variables. Such relations become critical in applications, for
example in fraud detection where a fraudulent action may be the dishonest combination of several
actions, each of them being reasonable when considered on their own.
Accounting for complicated shapes is also related to computational geometry and nonlinear algebra
applications, for example integral computation [11] and reconstruction of sets from moments data
[6, 7, 12]. Some of these problems have connections and potential applications in machine learning.
The work presented in this paper brings together ideas from both disciplines, leading to a method
which allows to encode in a simple manner the global shape and spatial concentration of points within
a cloud.
We start with a surprising (and apparently unnoticed) empirical observation. Given a collection of
points, one may build up a distinguished sum-of-squares (SOS) polynomial whose coefficients (or
Gram matrix) is the inverse of the empirical moment matrix (see Section 3). Its degree depends on
how many moments are considered, a choice left to the user. Remarkably its sublevel sets capture
much of the global shape of the cloud as illustrated in Figure 1. This phenomenon is not incidental as
illustrated in many additional examples in Appendix A. To the best of our knowledge, this observation
has remained unnoticed and the purpose of this paper is to report this empirical finding to the machine
learning community and provide first elements toward a mathematical understanding as well as
potential machine learning applications.
30th Conference on Neural Information Processing Systems (NIPS 2016), Barcelona, Spain.
Figure 1: Left: 1000 points in $\mathbb{R}^2$ and the level sets of the corresponding inverse moment matrix SOS polynomial $Q_{\mu,d}$ ($d = 4$). The level set $\{x : Q_{\mu,d}(x) \leq \binom{p+d}{d}\}$, which corresponds to the average value of $Q_{\mu,d}$, is represented in red. Right: 1040 points in $\mathbb{R}^2$ with size and color proportional to the value of the inverse moment matrix SOS polynomial $Q_{\mu,d}$ ($d = 8$).
The proposed method is based on the computation of the coefficients of a very specific polynomial
which depends solely on the empirical moments associated with the data points. From a practical
perspective, this can be done via a single pass through the data, or even in an online fashion via
a sequence of efficient Woodbury updates. Furthermore the computational cost of evaluating the
polynomial does not depend on the number of data points which is a crucial difference with existing
nonparametric methods such as nearest neighbors or kernel based methods [3]. On the other hand,
this computation requires the inversion of a matrix whose size depends on the dimension of the
problem (see Section 3). Therefore, the proposed framework is suited for moderate dimensions and
potentially very large number of observations.
In Section 4 we first describe an affine invariance result which suggests that the distinguished SOS
polynomial captures very intrinsic properties of clouds of points. In a second step, we provide a
mathematical interpretation that supports our empirical findings based on connections with orthogonal
polynomials [5]. We propose a generalization of a well known extremality result for orthogonal
univariate polynomials on the real line (or the complex plane) [16, Theorem 3.1.2]. As a consequence,
the distinguished SOS polynomial of interest in this paper is understood as the unique optimal
solution of a convex optimization problem: minimizing an average value over a structured set of
positive polynomials. In addition, we revisit [16, Theorem 3.5.6] about the Christoffel function.
The mathematics behind provide a simple and intuitive explanation for the phenomenon that we
empirically observed.
Finally, in Section 5 we perform numerical experiments on KDD cup network intrusion dataset
[13]. Evaluation of the distinguished SOS polynomial provides a score that we use as a measure of
outlyingness to detect network intrusions (assuming that they correspond to outlier observations).
We refer the reader to [3] for a discussion of available methods for this task. For the sake of a
fair comparison we have reproduced the experiments performed in [18] for the same dataset. We
report results similar to (and sometimes better than) those described in [18] which suggests that the
method is comparable to other dedicated approaches for network intrusion detection, including robust
estimation and Mahalanobis distance [8, 10], mixture models [14] and recurrent neural networks
[18].
2
Multivariate polynomials, moments and sums of squares
Notations: We fix the ambient dimension to be p throughout the text. For example, we will
manipulate vectors in Rp as well as p-variate polynomials with real coefficients. We denote by X a
set of p variables X1 , . . . , Xp which we will use in mathematical expressions defining polynomials.
We identify monomials from the canonical basis of p-variate polynomials with their exponents in
$\mathbb{N}^p$: we associate to $\alpha = (\alpha_i)_{i=1,\ldots,p} \in \mathbb{N}^p$ the monomial $X^\alpha := X_1^{\alpha_1} X_2^{\alpha_2} \cdots X_p^{\alpha_p}$, whose degree is $\deg(\alpha) := \sum_{i=1}^p \alpha_i$. We use the expressions $<_{gl}$ and $\leq_{gl}$ to denote the graded lexicographic order, a well ordering over $p$-variate monomials. This amounts to, first, using the canonical order on the degree and, second, breaking ties between monomials of the same degree using the lexicographic order with $X_1 = a$, $X_2 = b$, etc. For example, the monomials in two variables $X_1, X_2$ of degree less than or equal to 3, listed in this order, are: $1,\ X_1,\ X_2,\ X_1^2,\ X_1X_2,\ X_2^2,\ X_1^3,\ X_1^2X_2,\ X_1X_2^2,\ X_2^3$.
We denote by $\mathbb{N}^p_d$ the set $\{\alpha \in \mathbb{N}^p : \deg(\alpha) \leq d\}$, ordered by $\leq_{gl}$. $\mathbb{R}[X]$ denotes the set of $p$-variate polynomials: linear combinations of monomials with real coefficients. The degree of a polynomial is the highest of the degrees of its monomials with nonzero coefficients.¹ We use the same notation, $\deg(\cdot)$, to denote the degree of a polynomial or of an element of $\mathbb{N}^p$. For $d \in \mathbb{N}$, $\mathbb{R}_d[X]$ denotes the set of $p$-variate polynomials of degree less than or equal to $d$. We set $s(d) = \binom{p+d}{d}$, the number of monomials of degree less than or equal to $d$. We denote by $v_d(X)$ the vector of monomials of degree less than or equal to $d$ sorted by $\leq_{gl}$, i.e., $v_d(X) := (X^\alpha)_{\alpha \in \mathbb{N}^p_d} \in \mathbb{R}_d[X]^{s(d)}$. With this notation, we can write a polynomial $P \in \mathbb{R}_d[X]$ as $P(X) = \langle \mathbf{p}, v_d(X) \rangle$ for some real vector of coefficients $\mathbf{p} = (p_\alpha)_{\alpha \in \mathbb{N}^p_d} \in \mathbb{R}^{s(d)}$ ordered using $\leq_{gl}$. Given $x = (x_i)_{i=1,\ldots,p} \in \mathbb{R}^p$, $P(x)$ denotes the evaluation of $P$ with the assignments $X_1 = x_1, X_2 = x_2, \ldots, X_p = x_p$. Given a Borel probability measure $\mu$ and $\alpha \in \mathbb{N}^p$, $y_\alpha(\mu)$ denotes the moment $\alpha$ of $\mu$: $y_\alpha(\mu) = \int_{\mathbb{R}^p} x^\alpha \, d\mu(x)$. Throughout the paper, we will only consider measures of which all moments are finite.
Moment matrix: Given a Borel probability measure $\mu$ on $\mathbb{R}^p$, the moment matrix of $\mu$, $M_d(\mu)$, is a matrix indexed by monomials of degree at most $d$ ordered by $\leq_{gl}$. For $\alpha, \beta \in \mathbb{N}^p_d$, the corresponding entry in $M_d(\mu)$ is defined by $M_d(\mu)_{\alpha,\beta} := y_{\alpha+\beta}(\mu)$, the moment $\alpha + \beta$ of $\mu$. When $p = 2$, letting $y_\alpha = y_\alpha(\mu)$ for $\alpha \in \mathbb{N}^2_4$, we have
$$M_2(\mu) = \begin{array}{c|cccccc}
 & 1 & X_1 & X_2 & X_1^2 & X_1X_2 & X_2^2 \\ \hline
1 & 1 & y_{10} & y_{01} & y_{20} & y_{11} & y_{02} \\
X_1 & y_{10} & y_{20} & y_{11} & y_{30} & y_{21} & y_{12} \\
X_2 & y_{01} & y_{11} & y_{02} & y_{21} & y_{12} & y_{03} \\
X_1^2 & y_{20} & y_{30} & y_{21} & y_{40} & y_{31} & y_{22} \\
X_1X_2 & y_{11} & y_{21} & y_{12} & y_{31} & y_{22} & y_{13} \\
X_2^2 & y_{02} & y_{12} & y_{03} & y_{22} & y_{13} & y_{04}
\end{array}$$
$M_d(\mu)$ is positive semidefinite for all $d \in \mathbb{N}$. Indeed, for any $\mathbf{p} \in \mathbb{R}^{s(d)}$, let $P \in \mathbb{R}_d[X]$ be the polynomial with vector of coefficients $\mathbf{p}$; we have $\mathbf{p}^T M_d(\mu) \mathbf{p} = \int_{\mathbb{R}^p} P^2(x) \, d\mu(x) \geq 0$. Furthermore, we have the identity $M_d(\mu) = \int_{\mathbb{R}^p} v_d(x) v_d(x)^T \, d\mu(x)$, where the integral is understood elementwise.

Sum of squares (SOS): We denote by $\Sigma[X] \subset \mathbb{R}[X]$ (resp. $\Sigma_d[X] \subset \mathbb{R}_d[X]$) the set of polynomials (resp. polynomials of degree at most $d$) which can be written as a sum of squares of polynomials. Let $P \in \mathbb{R}_{2m}[X]$ for some $m \in \mathbb{N}$; then $P$ belongs to $\Sigma_{2m}[X]$ if there exist a finite set $J \subset \mathbb{N}$ and a family of polynomials $P_j \in \mathbb{R}_m[X]$, $j \in J$, such that $P = \sum_{j \in J} P_j^2$. It is obvious that sum of squares polynomials are always nonnegative. A further interesting property is that this class of polynomials is connected with positive semidefiniteness. Indeed, $P$ belongs to $\Sigma_{2m}[X]$ if and only if
$$\exists\, Q \in \mathbb{R}^{s(m) \times s(m)},\ Q \succeq 0,\quad P(x) = v_m(x)^T Q\, v_m(x),\ \forall x \in \mathbb{R}^p. \tag{1}$$
As a consequence, every positive semidefinite matrix $Q \in \mathbb{R}^{s(m) \times s(m)}$ defines a polynomial in $\Sigma_{2m}[X]$ by using the representation in (1).

3  Empirical observations on the inverse moment matrix SOS polynomial
The inverse moment-matrix SOS polynomial is associated to a measure $\mu$ which satisfies the following.

Assumption 1: $\mu$ is a Borel probability measure on $\mathbb{R}^p$ with all its moments finite, and $M_d(\mu)$ is positive definite for a given $d \in \mathbb{N}$.

Definition 1: Let $\mu, d$ satisfy Assumption 1. We call the SOS polynomial $Q_{\mu,d} \in \Sigma_{2d}[X]$ defined by the application
$$x \mapsto Q_{\mu,d}(x) := v_d(x)^T M_d(\mu)^{-1} v_d(x), \quad x \in \mathbb{R}^p, \tag{2}$$
the inverse moment-matrix SOS polynomial of degree $2d$ associated to $\mu$.

¹ For the null polynomial, we use the convention that its degree is 0 and it is $\leq_{gl}$ smaller than all other monomials.
Actually, the connection to orthogonal polynomials will show that the inverse function $x \mapsto Q_{\mu,d}(x)^{-1}$ is called the Christoffel function in the literature [16, 5] (see also Section 4).

In the remainder of this section, we focus on the situation where $\mu$ corresponds to an empirical measure over $n$ points in $\mathbb{R}^p$ which are fixed. So let $x_1, \ldots, x_n \in \mathbb{R}^p$ be a fixed set of points and let $\mu := \frac{1}{n} \sum_{i=1}^n \delta_{x_i}$, where $\delta_x$ corresponds to the Dirac measure at $x$. In such a case the polynomial $Q_{\mu,d}$ in (2) is determined only by the empirical moments up to degree $2d$ of our collection of points. Note that we also require that $M_d(\mu) \succ 0$; in other words, the points $x_1, \ldots, x_n$ do not belong to an algebraic set defined by a polynomial of degree less than or equal to $d$. We first describe empirical properties of the inverse moment matrix SOS polynomial in this context of empirical measures. A mathematical intuition and further properties behind these observations are developed in Section 4.
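To make the construction concrete, the following is a minimal NumPy sketch (the helper names are ours, not from the paper) that builds the empirical moment matrix $M_d(\mu)$ from a cloud of points and returns a callable $Q_{\mu,d}$; the final assertion checks the identity $\int Q_{\mu,d}\, d\mu = s(d)$ established in Lemma 2 below.

```python
import numpy as np
from itertools import combinations_with_replacement
from math import comb

def monomial_exponents(p, d):
    # All alpha in N^p with deg(alpha) <= d, grouped by increasing degree.
    exps = []
    for deg in range(d + 1):
        for c in combinations_with_replacement(range(p), deg):
            alpha = [0] * p
            for i in c:
                alpha[i] += 1
            exps.append(alpha)
    return np.array(exps)                      # shape (s(d), p)

def monomial_features(X, exps):
    # Rows are v_d(x_i): every point evaluated at every monomial.
    return np.prod(X[:, None, :] ** exps[None, :, :], axis=2)

def inverse_moment_sos(X, d):
    # Returns Q_{mu,d} for the empirical measure on the rows of X.
    n, p = X.shape
    exps = monomial_exponents(p, d)
    V = monomial_features(X, exps)             # (n, s(d))
    M = V.T @ V / n                            # empirical moment matrix M_d(mu)
    M_inv = np.linalg.inv(M)                   # assumes M_d(mu) is positive definite
    def Q(x):
        v = monomial_features(np.atleast_2d(np.asarray(x, dtype=float)), exps)
        return np.einsum('ij,jk,ik->i', v, M_inv, v)
    return Q

X = np.random.randn(500, 2)
Q = inverse_moment_sos(X, d=4)
assert np.isclose(Q(X).mean(), comb(2 + 4, 4))  # average value over the cloud is s(d)
```

This computes the coefficients in a single pass over the data, as described above; only the $s(d) \times s(d)$ inversion depends on the dimension and degree.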
3.1
Sublevel sets
The starting point of our investigations is the following phenomenon which to the best of our
knowledge has remained unnoticed in the literature. For the sake of clarity and simplicity we provide
an illustration in the plane. Consider the following experiment in $\mathbb{R}^2$ for a fixed $d \in \mathbb{N}$: represent on the same graphic the cloud of points $\{x_i\}_{i=1,\ldots,n}$ and the sublevel sets of the SOS polynomial $Q_{\mu,d}$ in $\mathbb{R}^2$ (equivalently, the superlevel sets of the Christoffel function). This is illustrated in the left panel of Figure 1. The collection of points consists of 500 simulations from two different Gaussians and the value of $d$ is 4. The striking feature of this plot is that the level sets capture the global shape of the cloud of points quite accurately. In particular, the level set $\{x : Q_{\mu,d}(x) \leq \binom{p+d}{d}\}$ captures most of the points. We could reproduce very similar observations on different shapes with various numbers of points in $\mathbb{R}^2$ and degrees $d$ (see Appendix A).
3.2
Measuring outlyingness
An additional remark in a similar line is that $Q_{\mu,d}$ tends to take higher values on points which are isolated from other points. Indeed, in the left panel of Figure 1, the value of the polynomial tends to be higher on the boundary of the cloud. This extends to situations where the collection of points corresponds to a shape with a high density of points plus a few additional outliers. We reproduce a similar experiment in the right panel of Figure 1. In this example, 1000 points are sampled close to a ring shape and 40 additional points are sampled uniformly on a larger square. We do not represent the sublevel sets of $Q_{\mu,d}$ here. Instead, the color and size of the points are taken proportionally to the value of $Q_{\mu,d}$, with $d = 8$.
First, the results confirm the observation of the previous paragraph: points that fall close to the ring shape tend to be smaller, and points on the boundary of the ring shape are larger. Second, there is a clear increase in the size of the points that are relatively far away from the ring shape. This highlights the fact that $Q_{\mu,d}$ tends to take higher values in less populated areas of the space.
3.3
Relation to maximum likelihood estimation
If we fix $d = 1$, we recover the maximum likelihood estimation for the Gaussian, up to a constant additive factor. To see this, set $\bar\mu = \frac{1}{n}\sum_{i=1}^n x_i$ and $S = \frac{1}{n}\sum_{i=1}^n x_i x_i^T$. With this notation, we have the following block representation of the moment matrix:
$$M_1(\mu) = \begin{pmatrix} 1 & \bar\mu^T \\ \bar\mu & S \end{pmatrix}, \qquad M_1(\mu)^{-1} = \begin{pmatrix} 1 + \bar\mu^T V^{-1} \bar\mu & -\bar\mu^T V^{-1} \\ -V^{-1}\bar\mu & V^{-1} \end{pmatrix},$$
where $V = S - \bar\mu \bar\mu^T$ is the empirical covariance matrix and the expression for the inverse is given by the Schur complement. In this case, we have $Q_{\mu,1}(x) = 1 + (x - \bar\mu)^T V^{-1} (x - \bar\mu)$ for all $x \in \mathbb{R}^p$. We
recognize the quadratic form that appears in the density function of the multivariate Gaussian with
parameters estimated by maximum likelihood. This suggests a connection between the inverse SOS
moment polynomial and maximum likelihood estimation. Unfortunately, this connection is difficult
to generalize for higher values of d and we do not pursue the idea of interpreting the empirical
observations of this section through the prism of maximum likelihood estimation and leave it for
further research. Instead, we propose an alternative view in Section 4.
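As a quick numerical sanity check of the $d = 1$ identity above, the following snippet (reusing the hypothetical inverse_moment_sos helper sketched earlier) compares $Q_{\mu,1}$ against one plus the squared Mahalanobis distance built from the empirical mean and covariance:

```python
X = np.random.randn(1000, 3)
Q1 = inverse_moment_sos(X, d=1)
mu = X.mean(axis=0)
V = np.cov(X, rowvar=False, bias=True)        # empirical covariance, 1/n normalization
z = np.random.randn(3)
maha = (z - mu) @ np.linalg.inv(V) @ (z - mu)
assert np.isclose(Q1(z)[0], 1.0 + maha)       # Q_{mu,1}(z) = 1 + Mahalanobis^2
```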
3.4  Computational aspects
Recall that $s(d) = \binom{p+d}{d}$ is the number of $p$-variate monomials of degree up to $d$. The computation of $Q_{\mu,d}$ requires $O(n\, s(d)^2)$ operations for the computation of the moment matrix and $O(s(d)^3)$ operations for the matrix inversion. The evaluation of $Q_{\mu,d}$ requires $O(s(d)^2)$ operations.

Estimating the coefficients of $Q_{\mu,d}$ has a computational cost that depends only linearly on the number of points $n$. The cost of evaluating $Q_{\mu,d}$ is constant with respect to the number of points $n$. This is
an important contrast with kernel based or distance based methods (such as nearest neighbors and
one class SVM) for density estimation or outlier detection since they usually require at least O(n2 )
operations for the evaluation of the model [3]. Moreover, this is well suited for online settings where
inverse moment matrix computation can be done using rank one Woodbury updates [15, Section
2.7.1].
The dependence on the dimension $p$ is of the order of $p^d$ for fixed $d$. Similarly, the dependence on $d$ is of the order of $d^p$ for a fixed dimension $p$, and the joint dependence is exponential. Furthermore, $M_d(\mu)$ has a Hankel structure, which is known to produce ill-conditioned matrices. This suggests that the direct computation and evaluation of $Q_{\mu,d}$ will mostly make sense for moderate dimensions and degrees $d$. In our experiments, for large $d$, the evaluation of $Q_{\mu,d}$ remains quite stable, but the inversion leads to numerical error for higher values (around 20).
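For the online setting mentioned above, a rank-one Sherman-Morrison/Woodbury step can maintain $M_d(\mu)^{-1}$ as points arrive. The sketch below is our own illustrative helper, under the assumption that the updated moment matrix stays positive definite; it updates the inverse when an $(n+1)$-th point with monomial vector $v = v_d(x_{\mathrm{new}})$ is added:

```python
def woodbury_update(M_inv, v, n):
    # New moment matrix: M' = (n * M + v v^T) / (n + 1).
    A_inv = M_inv / n                                  # inverse of n * M
    Av = A_inv @ v
    A_inv = A_inv - np.outer(Av, Av) / (1.0 + v @ Av)  # Sherman-Morrison step
    return (n + 1) * A_inv                             # inverse of M'
```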
4
Invariance and interpretation through orthogonal polynomials
The purpose of this section is to provide a mathematical rationale that explains the empirical observations made in Section 3. All the proofs are postponed to Appendix B. We fix a Borel probability measure $\mu$ on $\mathbb{R}^p$ which satisfies Assumption 1. Note that $M_d(\mu)$ is always positive definite if $\mu$ is not supported on the zero set of a polynomial of degree at most $d$. Under Assumption 1, $M_d(\mu)$ induces an inner product on $\mathbb{R}^{s(d)}$ and, by extension, on $\mathbb{R}_d[X]$ (see Section 2). This inner product is denoted by $\langle \cdot, \cdot \rangle_\mu$ and satisfies, for any polynomials $P, Q \in \mathbb{R}_d[X]$ with coefficients $\mathbf{p}, \mathbf{q} \in \mathbb{R}^{s(d)}$,
$$\langle P, Q \rangle_\mu := \langle \mathbf{p}, M_d(\mu)\mathbf{q} \rangle_{\mathbb{R}^{s(d)}} = \int_{\mathbb{R}^p} P(x) Q(x)\, d\mu(x).$$
We will also use the canonical inner product over $\mathbb{R}_d[X]$, which we write $\langle P, Q \rangle_{\mathbb{R}_d[X]} := \langle \mathbf{p}, \mathbf{q} \rangle_{\mathbb{R}^{s(d)}}$ for any polynomials $P, Q \in \mathbb{R}_d[X]$ with coefficients $\mathbf{p}, \mathbf{q} \in \mathbb{R}^{s(d)}$. We will omit the subscripts for this canonical inner product and use $\langle \cdot, \cdot \rangle$ for both products.
4.1
Affine invariance
It is worth noticing that the mapping $x \mapsto Q_{\mu,d}(x)$ does not depend on the particular choice of $v_d(X)$ as a basis of $\mathbb{R}_d[X]$; any other basis would lead to the same mapping. This leads to the result that $Q_{\mu,d}$ captures affine invariant properties of $\mu$.

Lemma 1: Let $\mu$ satisfy Assumption 1 and let $A \in \mathbb{R}^{p \times p}$, $b \in \mathbb{R}^p$ define an invertible affine mapping on $\mathbb{R}^p$, $\mathcal{A} \colon x \mapsto Ax + b$. Then the push-forward measure, defined by $\tilde\mu(S) = \mu(\mathcal{A}^{-1}(S))$ for all Borel sets $S \subset \mathbb{R}^p$, satisfies Assumption 1 (with the same $d$ as $\mu$), and for all $x \in \mathbb{R}^p$, $Q_{\mu,d}(x) = Q_{\tilde\mu,d}(Ax + b)$.

Lemma 1 is probably better understood when $\mu = \frac{1}{n}\sum_{i=1}^n \delta_{x_i}$ as in Section 3. In this case, we have $\tilde\mu = \frac{1}{n}\sum_{i=1}^n \delta_{Ax_i + b}$, and Lemma 1 asserts that the level sets of $Q_{\tilde\mu,d}$ are simply the images of those of $Q_{\mu,d}$ under the affine transformation $x \mapsto Ax + b$. This is illustrated in Appendix D.
4.2
Connection with orthogonal polynomials
We define a classical [16, 5] family of orthonormal polynomials, $\{P_\alpha\}_{\alpha \in \mathbb{N}^p_d}$, ordered according to $\leq_{gl}$, which satisfies, for all $\alpha \in \mathbb{N}^p_d$,
$$\langle P_\alpha, X^\beta \rangle = 0 \ \text{if}\ \alpha <_{gl} \beta, \quad \langle P_\alpha, P_\alpha \rangle_\mu = 1, \quad \langle P_\alpha, X^\beta \rangle_\mu = 0 \ \text{if}\ \beta <_{gl} \alpha, \quad \langle P_\alpha, X^\alpha \rangle_\mu > 0. \tag{3}$$
It follows from (3) that $\langle P_\alpha, P_\beta \rangle_\mu = 0$ if $\alpha \neq \beta$. Existence and uniqueness of such a family is guaranteed by the Gram–Schmidt orthonormalization process following the $\leq_{gl}$ order, and by the
positivity of the moment matrix, see for instance [5, Theorem 3.1.11]. There exist determinantal
formulae [9] and more precise description can be made for measures which have additional geometric
properties, see [5] for many examples.
Let Dd (?) be the lower triangular matrix whose rows are the coefficients of the polynomials P?
defined in (3) ordered by ?gl . It can be shown that Dd (?) = Ld (?)?T , where Ld (?) is the Cholesky
factorization of Md (?). Furthermore, there is a direct relation with the inverse moment matrix as
Md (?)?1 = Dd (?)T Dd (?) [9, Proof of Theorem 3.1]. This has the following consequence.
P
Lemma 2 Let ? satisfy Assumption 1, then Q?,d =
P?2 , where the family {P? }??Np is
??Np
d
d
R
defined by (3) and Rp Q?,d (x)d?(x) = s(d).
That is, Q?,d is a very specific and distinguished SOS polynomial, the sum of squares of the
orthonormal basis elements {P? }??Np of Rd (X) (w.r.t. ?). Furthermore, the average value of Q?,d
d
with respect to ? is s(d) which corresponds to the red level set in left panel of Figure 3.
4.3
A variational formulation for the inverse moment matrix SOS polynomial
In this section, we show that the family of polynomials $\{P_\alpha\}_{\alpha \in \mathbb{N}^p_d}$ defined in (3) is the unique solution (up to a multiplicative constant) of a convex optimization problem over polynomials. This fact, combined with Lemma 2, provides a mathematical rationale for the empirical observations outlined in Section 3. Consider the following optimization problem:
$$\min_{Q_\alpha, \lambda_\alpha,\ \alpha \in \mathbb{N}^p_d}\ \ \frac{1}{2} \int_{\mathbb{R}^p} \sum_{\alpha \in \mathbb{N}^p_d} Q_\alpha(x)^2\, d\mu(x) \tag{4}$$
$$\text{s.t.}\quad q_{\alpha\alpha} \geq \exp(\lambda_\alpha), \quad q_{\alpha\beta} = 0,\ \alpha, \beta \in \mathbb{N}^p_d,\ \alpha <_{gl} \beta, \quad \sum_{\alpha \in \mathbb{N}^p_d} \lambda_\alpha = 0,$$
where $Q_\alpha(x) = \sum_{\beta \in \mathbb{N}^p_d} q_{\alpha\beta}\, x^\beta$ is a polynomial and $\lambda_\alpha$ is a real variable for each $\alpha \in \mathbb{N}^p_d$. We first comment on problem (4). Let $P = \sum_{\alpha \in \mathbb{N}^p_d} Q_\alpha^2$ be the SOS polynomial appearing in the objective function of (4). The objective of (4) simply involves the average value of $P$ with respect to $\mu$. Let $S_d \subset \Sigma_{2d}[X]$ be the set of such SOS polynomials $P$ which have a sum of squares decomposition satisfying the constraints of (4) (for some arbitrary value of the real variables $\{\lambda_\alpha\}_{\alpha \in \mathbb{N}^p_d}$). With this notation, problem (4) has the simple formulation $\min_{P \in S_d} \frac{1}{2} \int P\, d\mu$.
Based on this formulation, problem (4) can be interpreted as balancing two antagonistic targets: on the one hand, the minimization of the average value of the SOS polynomial $P$ with respect to $\mu$; on the other hand, the avoidance of the trivial polynomial, enforced by the constraint that $P \in S_d$. The constraint $P \in S_d$ is simple and natural. It ensures that $P$ is a sum of squares of polynomials $\{Q_\alpha\}_{\alpha \in \mathbb{N}^p_d}$, where the leading term of each $Q_\alpha$ (according to the ordering $\leq_{gl}$) is $q_{\alpha\alpha} x^\alpha$ with $q_{\alpha\alpha} > 0$ (and hence does not vanish). Conversely, using Cholesky factorization, for any SOS polynomial $Q$ of degree $2d$ whose coefficient matrix (see equation (1)) is positive definite, there exists $a > 0$ such that $aQ \in S_d$. This suggests that $S_d$ is a quite general class of nonvanishing SOS polynomials. The following result, which gives a relation between $Q_{\mu,d}$ and solutions of (4), uses a generalization of [16, Theorem 3.1.2] to orthogonal polynomials of several variables.
Theorem 1: Under Assumption 1, problem (4) is a convex optimization problem with a unique optimal solution $(Q^*_\alpha, \lambda^*_\alpha)$, which satisfies $Q^*_\alpha = \sqrt{\lambda}\, P_\alpha$, $\alpha \in \mathbb{N}^p_d$, for some $\lambda > 0$. In particular, the distinguished SOS polynomial $Q_{\mu,d} = \sum_{\alpha \in \mathbb{N}^p_d} P_\alpha^2 = \frac{1}{\lambda} \sum_{\alpha \in \mathbb{N}^p_d} (Q^*_\alpha)^2$ is (part of) the unique optimal solution of (4).
Theorem 1 states that, up to the scaling factor $\lambda$, the distinguished SOS polynomial $Q_{\mu,d}$ is the unique optimal solution of problem (4). A detailed proof is provided in Appendix B, and we only sketch the main ideas here. First, it is remarkable that for each fixed $\alpha \in \mathbb{N}^p_d$ (and again up to a scaling factor) the polynomial $P_\alpha$ is the unique optimal solution of the problem:
$$\min_Q\ \left\{ \int Q^2\, d\mu \ :\ Q \in \mathbb{R}_d[X],\ Q(x) = x^\alpha + \sum_{\beta <_{gl} \alpha} q_\beta x^\beta \right\}.$$
This fact is well-known in the univariate case [16, Theorem 3.1.2] and does not seem to have been exploited in the literature, at
least for purposes similar to ours. So intuitively, $P_\alpha^2$ should be as close to 0 as possible on the support of $\mu$. Problem (4) has similar properties, and the constraint on the vector of weights $\lambda$ enforces that, at an optimal solution, the contribution of $\int (Q^*_\alpha)^2\, d\mu$ to the overall sum in the criterion is the same for all $\alpha$. Using Lemma 2 yields (up to a multiplicative constant) the polynomial $Q_{\mu,d}$. Other constraints on $\lambda$ would yield different weighted sums of the squares $P_\alpha^2$. This will be a subject of further investigations.

To sum up, Theorem 1 provides a rationale for our observations. Indeed, when solving (4), intuitively $Q_{\mu,d}$ should be close to 0 on average while remaining in a class of nonvanishing SOS polynomials.
4.4
Christoffel function and outlier detection
The following result from [5, Theorem 3.5.6] draws a direct connection between $Q_{\mu,d}$ and the Christoffel function (the right-hand side of (5)).

Theorem 2 ([5]): Let Assumption 1 hold and let $z \in \mathbb{R}^p$ be fixed, arbitrary. Then
$$Q_{\mu,d}(z)^{-1} = \min_{P \in \mathbb{R}_d[X]}\ \left\{ \int_{\mathbb{R}^p} P(x)^2\, d\mu(x) \ :\ P(z) = 1 \right\}. \tag{5}$$

Theorem 2 provides a mathematical rationale for the use of $Q_{\mu,d}$ for outlier or novelty detection purposes. Indeed, from Lemma 2 and equation (3), we have $Q_{\mu,d} \geq 1$ on $\mathbb{R}^p$. Furthermore, the solution of the minimization problem in (5) satisfies $P(z)^2 = 1$ and $\mu\left(\{x \in \mathbb{R}^p : P(x)^2 \leq 1\}\right) \geq 1 - Q_{\mu,d}(z)^{-1}$ (by Markov's inequality). Hence, for high values of $Q_{\mu,d}(z)$, the sublevel set $\{x \in \mathbb{R}^p : P(x)^2 \leq 1\}$ contains most of the mass of $\mu$ while $P(z)^2 = 1$. An illustration of this discussion is given in Appendix E. Again, the result of Theorem 2 does not seem to have been interpreted for purposes similar to ours.
5
Experiments on network intrusion datasets
In addition to having its own mathematical interest, Theorem 1 can be exploited for various purposes. For instance, the sublevel sets of $Q_{\mu,d}$, and in particular $\{x \in \mathbb{R}^p : Q_{\mu,d}(x) \leq \binom{p+d}{d}\}$, can be used to encode a cloud of points in a simple and compact form. However, in this section we focus on another potential application, in anomaly detection.

Empirical findings described in Section 3 suggest that the polynomial $Q_{\mu,d}$ can be used to detect outliers in a collection of real vectors (with $\mu$ the empirical average). This is backed up by the results presented in Section 4. We illustrate these properties on a real-world example. We choose the KDD Cup 99 network intrusion dataset [13], consisting of network connection data labeled as normal traffic or network intrusions. We follow [19] and [18] and construct five datasets consisting of labeled
vectors in $\mathbb{R}^3$ with the following properties:

Dataset               | http   | smtp   | ftp-data | ftp   | others
Number of examples    | 567498 | 95156  | 30464    | 4091  | 5858
Proportion of attacks | 0.004  | 0.0003 | 0.023    | 0.077 | 0.016
The details on the datasets construction are available in [19, 18] and reproduced in Appendix C.
The main idea is to compute an outlyingness score (independent of the label) and compare outliers
predicted by the score and network intrusion labels. The underlying assumption is that network
intrusions correspond to infrequent abnormal behaviors and could be considered as outliers.
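In code, the scoring pipeline is simply an evaluation of $Q_{\mu,d}$ on the data it was fit on. Here is a minimal sketch, reusing the hypothetical inverse_moment_sos helper from Section 3; X_kdd stands for one of the $(n, 3)$ datasets above and is an assumed name:

```python
scores = inverse_moment_sos(X_kdd, d=3)(X_kdd)   # higher score = more outlying
ranking = np.argsort(-scores)                    # candidate intrusions first
```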
We reproduce the same experiment as in [18, Section 5.4], using the value of $Q_{\mu,d}$ from Definition 1 as an outlyingness score (with $d = 3$). The authors of [18] have compared different methods in the same experimental setting: robust estimation and Mahalanobis distance [8, 10], mixture models [14], and recurrent neural networks. The results are gathered in [18, Figure 7]. In the left panel of Figure 2 we present the same performance measure for our approach: we first compute the value of $Q_{\mu,d}$ for each datapoint and use it as an outlyingness score. We then display the proportion of correctly identified outliers, with score above a given threshold, as a function of the proportion of examples with score above the threshold (for different values of the threshold). The main comments are as follows.
[Figure 2 plots. Left panel: % correctly identified outliers vs. % top outlyingness score, one curve per dataset (http, smtp, ftp_data, ftp, others). Right panel: Precision vs. Recall, one curve per degree d, with AUPR values d = 1 (0.08), 2 (0.18), 3 (0.18), 4 (0.16), 5 (0.15), 6 (0.13).]
Figure 2: Left: reproduction of the results described in [18] with the evaluation of $Q_{\mu,d}$ as an outlyingness score ($d = 3$). Right: precision-recall curves for different values of $d$ (dataset "others").
• The inverse moment matrix SOS polynomial does detect network intrusions, with varying performance across the five datasets.
• Except for the "ftp-data" dataset, the global shapes of these curves are very similar to the results reported in [18, Figure 7], indicating that the proposed approach is comparable to other dedicated methods for intrusion detection on these four datasets.
In a second experiment, we investigate the effect of changing the value of $d$ on performance. We focus on the "others" dataset because it is the most heterogeneous. We adopt a slightly different measure of performance and use precision-recall curves (see for example [4]) to measure performance in identifying network intrusions (the higher the curve, the better). We call the area under such a curve the AUPR. The right panel of Figure 2 presents these results. First, the case $d = 1$, which corresponds to the vanilla Mahalanobis distance as outlined in Section 3.3, gives poor performance. Second, the global performance rapidly increases with $d$, then decreases and stabilizes. This suggests that $d$ can be used as a tuning parameter to control the "complexity" of $Q_{\mu,d}$. Indeed, $2d$ is the degree of the polynomial $Q_{\mu,d}$, and it is expected that more complex models will identify more diverse classes of examples as outliers. In our case, this means identifying regular traffic as outliers while it actually does not correspond to intrusions. In general, a good heuristic regarding the tuning of $d$ is to investigate performance on a well-specified task in a preliminary experiment.
6
Future work
An important question is the asymptotic regime when $d \to \infty$. The current state of knowledge suggests that, up to a correct scaling, the limit of the Christoffel functions (when known to exist) involves an edge effect term, related to the support of the measure, and the density of $\mu$ with respect to the Lebesgue measure; see for example [2] for the Euclidean ball. It also suggests connections with the notion of
equilibrium measure in potential theory [17, 1, 7]. Generalization and interpretation of these results
in our context will be investigated in future work.
Even though good approximations are obtained with low degree (at least in dimension 2 or 3), the
approach involves the inversion of large ill conditioned Hankel matrices which reduces considerably
the applicability for higher degrees and dimensions. A promising research line is to develop approximation procedures and advanced optimization and algebra tools so that the approach could scale
computationally to higher dimensions and degrees.
Finally, we did not touch the question of statistical accuracy. In the context of empirical processes, this
will be very relevant to understand further potential applications in machine learning and reduce the
gap between the abstract orthogonal polynomial theory and practical machine learning applications.
Acknowledgments
This work was partly supported by project ERC-ADG TAMING 666981, ERC-Advanced Grant of
the European Research Council and grant number FA9550-15-1-0500 from the Air Force Office of
Scientific Research, Air Force Material Command.
References
[1] R. J. Berman (2009). Bergman kernels for weighted polynomials and weighted equilibrium measures of $\mathbb{C}^n$. Indiana University Mathematics Journal, 58(4):1921–1946.
[2] L. Bos, B. Della Vecchia and G. Mastroianni (1998). On the asymptotics of Christoffel functions for centrally symmetric weight functions on the ball in $\mathbb{R}^n$. Rendiconti del Circolo Matematico di Palermo, 52:277–290.
[3] V. Chandola, A. Banerjee and V. Kumar (2009). Anomaly detection: A survey. ACM Computing Surveys (CSUR), 41(3):15.
[4] J. Davis and M. Goadrich (2006). The relationship between Precision-Recall and ROC curves. Proceedings of the 23rd International Conference on Machine Learning (pp. 233–240). ACM.
[5] C. F. Dunkl and Y. Xu (2001). Orthogonal polynomials of several variables. Cambridge University Press. MR1827871.
[6] G. H. Golub, P. Milanfar and J. Varah (1999). A stable numerical method for inverting shape from moments. SIAM Journal on Scientific Computing, 21(4):1222–1243.
[7] B. Gustafsson, M. Putinar, E. Saff and N. Stylianopoulos (2009). Bergman polynomials on an archipelago: estimates, zeros and shape reconstruction. Advances in Mathematics, 222(4):1405–1460.
[8] A. S. Hadi (1994). A modification of a method for the detection of outliers in multivariate samples. Journal of the Royal Statistical Society, Series B (Methodological), 56(2):393–396.
[9] J. W. Helton, J. B. Lasserre and M. Putinar (2008). Measures with zeros in the inverse of their moment matrix. The Annals of Probability, 36(4):1453–1471.
[10] E. M. Knorr, R. T. Ng and R. H. Zamar (2001). Robust space transformations for distance-based operations. Proceedings of the International Conference on Knowledge Discovery and Data Mining (pp. 126–135). ACM.
[11] J. B. Lasserre (2015). Level Sets and Non-Gaussian Integrals of Positively Homogeneous Functions. International Game Theory Review, 17(01):1540001.
[12] J. B. Lasserre and M. Putinar (2015). Algebraic-exponential Data Recovery from Moments. Discrete & Computational Geometry, 54(4):993–1012.
[13] M. Lichman (2013). UCI Machine Learning Repository, http://archive.ics.uci.edu/ml. University of California, Irvine, School of Information and Computer Sciences.
[14] J. J. Oliver, R. A. Baxter and C. S. Wallace (1996). Unsupervised learning using MML. Proceedings of the International Conference on Machine Learning (pp. 364–372).
[15] W. H. Press, S. A. Teukolsky, W. T. Vetterling and B. P. Flannery (2007). Numerical Recipes: The Art of Scientific Computing (3rd edition). Cambridge University Press.
[16] G. Szegő (1974). Orthogonal polynomials. Colloquium Publications, AMS, (23), fourth edition.
[17] V. Totik (2000). Asymptotics for Christoffel functions for general measures on the real line. Journal d'Analyse Mathématique, 81(1):283–303.
[18] G. Williams, R. Baxter, H. He, S. Hawkins and L. Gu (2002). A comparative study of RNN for outlier detection in data mining. IEEE International Conference on Data Mining (p. 709). IEEE Computer Society.
[19] K. Yamanishi, J. I. Takeuchi, G. Williams and P. Milne (2004). On-line unsupervised outlier detection using finite mixtures with discounting learning algorithms. Data Mining and Knowledge Discovery, 8(3):275–300.
6,075 | 6,496 | Sublinear Time Orthogonal Tensor Decomposition*
Zhao Song
David P. Woodruff
Huan Zhang
Dept. of Computer Science, University of Texas, Austin, USA
IBM Almaden Research Center, San Jose, USA
Dept. of Electrical and Computer Engineering, University of California, Davis, USA
zhaos@utexas.edu, dpwoodru@us.ibm.com, ecezhang@ucdavis.edu
Abstract
A recent work (Wang et al., NIPS 2015) gives the fastest known algorithms
for orthogonal tensor decomposition with provable guarantees. Their algorithm
is based on computing sketches of the input tensor, which requires reading the
entire input. We show in a number of cases one can achieve the same theoretical
guarantees in sublinear time, i.e., even without reading most of the input tensor.
Instead of using sketches to estimate inner products in tensor decomposition
algorithms, we use importance sampling. To achieve sublinear time, we need
to know the norms of tensor slices, and we show how to do this in a number of important cases. For symmetric tensors $T = \sum_{i=1}^k \lambda_i u_i^{\otimes p}$ with $\lambda_i > 0$ for all $i$, we estimate such norms in sublinear time whenever $p$ is even. For the important case of $p = 3$ and small values of $k$, we can also estimate such norms. For asymmetric tensors sublinear time is not possible in general, but we show that if the tensor slice norms are just slightly below $\|T\|_F$ then sublinear time is again possible. One of the main strengths of our work is empirical: in a number of cases our algorithm is orders of magnitude faster than existing methods with the same accuracy.
1
Introduction
Tensors are a powerful tool for dealing with multi-modal and multi-relational data. In recommendation
systems, often using more than two attributes can lead to better recommendations. This could occur,
for example, in Groupon where one could look at users, activities, and time (season, time of day,
weekday/weekend, etc.), as three attributes to base predictions on (see [13] for a discussion). Similar
to low rank matrix approximation, we seek a tensor decomposition to succinctly store the tensor and
to apply it quickly. A popular decomposition method is the canonical polyadic decomposition, i.e.,
the CANDECOMP/PARAFAC (CP) decomposition, where the tensor is decomposed into a sum of
rank-1 components [9]. We refer the reader to [23], where applications of CP including data mining,
computational neuroscience, and statistical learning for latent variable models are mentioned.
A natural question, given the emergence of large data sets, is whether such decompositions can be
performed quickly. There are a number of works on this topic [17, 16, 7, 11, 10, 4, 20]. Most related
to ours are several recent works of Wang et al. [23] and Tung et al. [18], in which it is shown how to
significantly speed up this decomposition for orthogonal tensor decomposition using the randomized
technique of linear sketching [15]. In this work we also focus on orthogonal tensor decomposition.
The idea in [23] is to create a succinct sketch of the input tensor, from which one can then perform
implicit tensor decomposition by approximating inner products in existing decomposition methods.
Existing methods, like the power method, involve computing the inner product of a vector, which is
now a rank-1 matrix, with another vector, which is now a slice of a tensor. Such inner products can
*Full version appears on arXiv, 2017. Work done while visiting IBM Almaden.
Supported by XDATA DARPA Air Force Research Laboratory contract FA8750-12-C-0323.
30th Conference on Neural Information Processing Systems (NIPS 2016), Barcelona, Spain.
be approximated much faster by instead computing the inner product of the sketched vectors, which
have significantly lower dimension. One can also replace the sketching with sampling to approximate
inner products; we discuss some sampling schemes [17, 4] below and compare them to our work.
1.1 Our Contributions
We show in a number of important cases, one can achieve the same theoretical guarantees in the
work of Wang et al. [23] (which was applied later by Tung et al. [18]), in sublinear time, that is,
without reading most of the input tensor. While previous work needs to walk through the input at
least once to create a sketch, we show one can instead perform importance sampling of the tensor
based on the current iterate, together with reading a few entries of the tensor which help us learn the
norms of tensor slices. We use a version of `2 -sampling for our importance sampling. One source of
speedup in our work and in Wang et al. [23] comes from approximating inner products in iterations
in the robust tensor power method (see below). To estimate $\langle u, v \rangle$ for $n$-dimensional vectors $u$ and $v$, their work computes sketches $S(u)$ and $S(v)$ and approximates $\langle u, v \rangle \approx \langle S(u), S(v) \rangle$. Instead, if one has $u$, one can sample coordinates $i$ proportional to $u_i^2$, which is known as $\ell_2$-sampling [14, 8]. One estimates $\langle u, v \rangle$ as $\frac{v_i \|u\|_2^2}{u_i}$, which is unbiased and has variance $O(\|u\|_2^2 \|v\|_2^2)$. These guarantees
are similar to those using sketching, though the constants are significantly smaller (see below), and
unlike sketching, one does not need to read the entire tensor to perform such sampling.
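A minimal sketch of this estimator (an illustrative helper of our own, not code from the paper), averaging $m$ independent $\ell_2$-samples to cut the variance by a factor of $m$:

```python
import numpy as np

def l2_sample_inner(u, v, m, rng=np.random.default_rng(0)):
    # Unbiased estimate of <u, v> from m coordinates sampled with
    # probability proportional to u_i^2 (l2-sampling).
    sq = np.dot(u, u)
    idx = rng.choice(len(u), size=m, p=u**2 / sq)
    # Each draw v_i * ||u||_2^2 / u_i is unbiased for <u, v>.
    return np.mean(v[idx] * sq / u[idx])
```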
Symmetric Tensors: As in [23], we focus on orthogonal tensor decomposition of symmetric tensors,
though we explain the extension to the asymmetric case below. Symmetric tensors arise in engineering
applications, for example, to represent the symmetric tensor field of stress, strain, and anisotropic
conductivity. Another example is diffusion MRI in which one uses symmetric tensors to describe
diffusion in the brain or other parts of the body. In spectral methods symmetric tensors are exactly
those that come up in Latent Dirichlet Allocation problems. Although one can symmetrize a tensor
using simple matrix operations (see, e.g., [1]), we cannot do this in sublinear time.
In orthogonal tensor decomposition of a symmetric tensor, there is an underlying $n \times n \times \cdots \times n$ tensor $\tilde T = \sum_{i=1}^k \lambda_i v_i^{\otimes p}$, and the input tensor is $T = \tilde T + E$, where $\|E\|_2 \leq \epsilon$. We have $\lambda_1 > \lambda_2 > \cdots > \lambda_k > 0$, and $\{v_i\}_{i=1}^k$ is a set of orthonormal vectors. The goal is to reconstruct approximations $\hat v_i$ to the vectors $v_i$, and approximations $\hat \lambda_i$ to the $\lambda_i$. Our results naturally generalize to tensors with different lengths in different dimensions. For simplicity, we first focus on order $p = 3$.
In the robust tensor power method [1], one generates a random initial vector $u$ and performs $T$ update steps $\hat u = T(I, u, u)/\|T(I, u, u)\|_2$, where
$$T(I, u, u) = \left[\ \sum_{j=1}^n \sum_{\ell=1}^n T_{1,j,\ell}\, u_j u_\ell,\ \ \sum_{j=1}^n \sum_{\ell=1}^n T_{2,j,\ell}\, u_j u_\ell,\ \ \cdots,\ \ \sum_{j=1}^n \sum_{\ell=1}^n T_{n,j,\ell}\, u_j u_\ell\ \right].$$
The matrices $T_{1,\cdot,\cdot}, \ldots, T_{n,\cdot,\cdot}$ are referred to as the slices. The vector $\hat u$ typically converges to the top eigenvector in a small number of iterations, and one often chooses a small number $L$ of random initial vectors to boost confidence. Successive eigenvectors can be found by deflation. The algorithm and analysis immediately extend to higher order tensors.
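For reference, here is a dense (non-sublinear) NumPy sketch of this iteration; all helper names are ours, and the sublinear algorithm would replace the einsum contractions with $\ell_2$-sampled estimates such as l2_sample_inner above:

```python
def power_step(T, u):
    # u <- T(I, u, u) / ||T(I, u, u)||_2 for a 3rd-order tensor T of shape (n, n, n).
    w = np.einsum('ijk,j,k->i', T, u, u)
    return w / np.linalg.norm(w)

def top_eigenpair(T, n_steps=30, n_init=10, rng=np.random.default_rng(0)):
    best = None
    for _ in range(n_init):                 # L random initializations
        u = rng.standard_normal(T.shape[0])
        u /= np.linalg.norm(u)
        for _ in range(n_steps):            # T power-method updates
            u = power_step(T, u)
        lam = np.einsum('ijk,i,j,k->', T, u, u, u)
        if best is None or lam > best[0]:
            best = (lam, u)
    return best

# Deflation for successive eigenpairs:
# T = T - lam * np.einsum('i,j,k->ijk', u, u, u)
```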
We use $\ell_2$-sampling to estimate $T(I, u, u)$. To achieve the same guarantees as in [23], for typical settings of parameters (constant $k$ and several eigenvalue assumptions) naively one needs to take $O(n^2)$ $\ell_2$-samples from $u$ for each slice in each iteration, resulting in $\Omega(n^3)$ time and destroying our sublinearity. We observe that if we additionally knew the squared norms $\|T_{1,\cdot,\cdot}\|_F^2, \ldots, \|T_{n,\cdot,\cdot}\|_F^2$, then we could take $O(n^2)$ $\ell_2$-samples in total, where we take $\frac{\|T_{i,\cdot,\cdot}\|_F^2}{\|T\|_F^2} \cdot O(n^2)$ $\ell_2$-samples from the $i$-th slice in expectation. Perhaps in some applications such norms are known or cheap to compute in a single pass, but without further assumptions, how can one obtain such norms in sublinear time?
Pk
3
If T is a symmetric tensor, then Tj,j,j = i=1 ?i vi,j
+ Ej,j,j . Note that if there were no noise,
Pk
2
then we could read off approximations to the slice norms, since k Tj,?,? k2F = i=1 ?2i vi,j
, and so
2/3
Tj,j,j is an approximation to k Tj,?,? k2F up to factors depending on k and the eigenvalues. However,
there is indeed noise. To obtain non-trivial guarantees, the robust tensor power method assumes
$\|E\|_2 = O(1/n)$, where
$$\|E\|_2 = \sup_{\|u\|_2 = \|v\|_2 = \|w\|_2 = 1} E(u, v, w) = \sup_{\|u\|_2 = \|v\|_2 = \|w\|_2 = 1} \sum_{i=1}^{n}\sum_{j=1}^{n}\sum_{k=1}^{n} E_{i,j,k}\, u_i v_j w_k,$$
which in particular implies $|E_{j,j,j}| = O(1/n)$. This assumption comes from the $\Theta(1/\sqrt{n})$-correlation of the random initial vector to $v_1$. This noise bound does not trivialize the problem;
indeed, $E_{j,j,j}$ can be chosen adversarially subject to $|E_{j,j,j}| = O(1/n)$, and if the $v_i$ were random
unit vectors and the $\lambda_i$ and $k$ were constant, then $\sum_{i=1}^{k} \lambda_i v_{i,j}^3 = O(1/n^{3/2})$, which is small enough
to be completely masked by the noise $E_{j,j,j}$. Nevertheless, there is a lot of information about the
slice norms. Indeed, suppose $k = 1$, $\lambda_1 = \Theta(1)$, and $\|T\|_F = 1$. Then $T_{j,j,j} = \Theta(v_{1,j}^3) + E_{j,j,j}$,
and one can show $\|T_{j,\cdot,\cdot}\|_F^2 = \lambda_1^2 v_{1,j}^2 \pm O(1/n)$. Again using that $|E_{j,j,j}| = O(1/n)$, this implies $\|T_{j,\cdot,\cdot}\|_F^2 = \Omega(n^{-2/3})$ if and only if $T_{j,j,j} = \Omega(1/n)$, and therefore one would notice this
by reading $T_{j,j,j}$. There can only be $o(n^{2/3})$ slices $j$ for which $\|T_{j,\cdot,\cdot}\|_F^2 = \Omega(n^{-2/3})$, since
$\|T\|_F^2 = 1$. Therefore, for each of them we can afford to take $O(n^2)$ $\ell_2$-samples and still have an
$O(n^{2+2/3}) = o(n^3)$ sublinear running time. The remaining slices all have $\|T_{j,\cdot,\cdot}\|_F^2 = O(n^{-2/3})$,
and therefore if we also take $O(n^{4/3})$ $\ell_2$-samples from every slice, we will also estimate the contribution to $T(I, u, u)$ from these slices well. This is also a sublinear $O(n^{2+1/3})$ number of samples.
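As a concrete illustration of this $k = 1$ budget rule, the following Python sketch (our own illustrative code; the threshold constant `c` is a hypothetical tuning parameter) classifies slices by their superdiagonal entries and allocates $\ell_2$-sample budgets accordingly.

```python
import numpy as np

def slice_budgets_rank1(T_diag, n, c=1.0):
    """Allocate per-slice l2-sample budgets for the k = 1 case described
    above. T_diag holds the n superdiagonal entries T[j, j, j]; c is a
    hypothetical constant tuning the Omega(1/n) threshold.
    Slices with |T[j,j,j]| >= c/n may have large Frobenius norm and get
    O(n^2) samples; the remaining slices get O(n^(4/3)) samples."""
    heavy = np.abs(T_diag) >= c / n
    return np.where(heavy, n ** 2, int(np.ceil(n ** (4.0 / 3.0))))
```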
While the preceding discussion illustrates the idea for $k = 1$, for $k = 2$ we need to read more than the
$T_{j,j,j}$ entries to decide how many $\ell_2$-samples to take from a slice. The analysis is more complicated
because of sign cancellations. Even for $k = 2$ we could have $T_{j,j,j} = \lambda_1 v_{1,j}^3 + \lambda_2 v_{2,j}^3 + E_{j,j,j}$,
and if $v_{1,j} = -v_{2,j}$ then we may not detect that $\|T_{j,\cdot,\cdot}\|_F$ is large. We fix this by also reading the
entries $T_{i,j,j}$, $T_{j,i,j}$, and $T_{j,j,i}$ for every $i$ and $j$. This is still only $O(n^2)$ entries and so we are still
sublinear time. Without additional assumptions, we only give a formal analysis of this for $k \in \{1, 2\}$.
More importantly, if instead of third-order symmetric tensors we consider p-th order symmetric
tensors for even p, we do not have such sign cancellations. In this case we do not have any restrictions
on $k$ for estimating slice norms. One does need to show that, after deflation, the slice norms can still be
estimated; this holds because the eigenvectors and eigenvalues are estimated sufficiently well.
We also give several per-iteration optimizations of our algorithm, based on careful implementations
of generating a sorted list of random numbers and random permutations. We find empirically (see
below) that we are much faster per iteration than previous sketching algorithms, in addition to not
having to read the entire input tensor in a preprocessing step.
Asymmetric Tensors: For asymmetric tensors, e.g., 3rd-order tensors of the form $\sum_{i=1}^{k} \lambda_i\, u_i \otimes v_i \otimes w_i$, it is impossible to achieve sublinear time in general, since it is hard to distinguish $T = e_i \otimes e_j \otimes e_k$
for random $i, j, k \in \{1, 2, \ldots, n\}$ from $T = 0^{\otimes 3}$. We make a necessary and sufficient assumption
that all the entries of the $u_i$ are less than $n^{-\gamma}$ for an arbitrarily small constant $\gamma > 0$. In this case, all
slice norms are $o(n^{-\gamma})$ and by taking $O(n^{2-\gamma})$ samples from each slice we achieve sublinear time.
We can also apply such an assumption to symmetric tensors.
Empirical Results: One of the main strengths of our work is our empirical results. In each iteration
we approximate T(I, u, u) a total of B times independently and take the median to increase our
confidence. In the notation of [23], B corresponds to the number of independent sketches used.
While the median works empirically, there are some theoretical issues with it discussed in Remark 4.
Also let b be the total number of `2 -samples we take per iteration, which corresponds to the sketch
size in the notation of [23]. We found that empirically we can set B and b to be much smaller than
that in [23] and achieve the same error guarantees. One explanation for this is that the variance bound
we obtain via importance sampling is a factor of $4^3 = 64$ smaller than in [23], and for $p$-th order
tensors, a factor of $4^p$ smaller.
To give an idea of how much smaller we can set $b$ and $B$: to achieve roughly the same squared residual
norm error on the synthetic data sets of dimension 1200 for finding a good rank-1 approximation,
the algorithm of [23] would need to set parameters $b = 2^{16}$ and $B = 50$, whereas we can set
$b = 10 \cdot 1200$ and $B = 5$. Our running time is 2.595 seconds and we have no preprocessing time,
whereas the algorithm of [23] has a running time of 116.3 seconds and 55.34 seconds of preprocessing
time. We refer the reader to Table 1 in Section 3. In total we are over 50 times faster.
We also demonstrate our algorithm in a real-world application using real datasets, even when the
datasets are sparse. Namely, we consider a spectral algorithm for Latent Dirichlet Allocation [1, 2]
which uses tensor decomposition as its core computational step. We show a significant speedup can
be achieved on tensors occurring in applications such as LDA, and we refer the reader to Table 2 in
Section 3. For example, on the wiki [23] dataset with a tensor dimension of 200, we run more than 5
times faster than the sketching-based method.
Previous Sampling Algorithms: Previous sampling-based schemes of [17, 4] do not achieve our
guarantees, because [17] uses uniform sampling, which does not work for tensors with spiky elements,
while the non-uniform sampling in [4] requires touching all of the entries in the tensor and making
two passes over it.
Notation: Let $[n]$ denote $\{1, 2, \ldots, n\}$. Let $\otimes$ denote the outer product, and $u^{\otimes 3} = u \otimes u \otimes u$. Let $T \in \mathbb{R}^{n^p}$, where $p$ is the order of tensor $T$ and $n$ is the dimension of tensor $T$. Let $\langle A, B\rangle$ denote the entry-wise inner product between two tensors $A, B \in \mathbb{R}^{n^p}$, i.e.,
$\langle A, B\rangle = \sum_{i_1=1}^{n}\sum_{i_2=1}^{n}\cdots\sum_{i_p=1}^{n} A_{i_1,i_2,\cdots,i_p} \cdot B_{i_1,i_2,\cdots,i_p}$. For a tensor $A \in \mathbb{R}^{n^p}$, $\|A\|_F =
(\sum_{i_1=1}^{n}\sum_{i_2=1}^{n}\cdots\sum_{i_p=1}^{n} A_{i_1,\cdots,i_p}^2)^{1/2}$. For a random variable $X$, let $\mathbb{E}[X]$ denote its expectation
and $\mathbb{V}[X]$ its variance (if these quantities exist).
2 Main Results
We explain the details of our main results in this section. First, we state the importance sampling
lemmas for our tensor application. Second, we explain how to quickly produce a list of random
tuples according to a certain distribution needed by our algorithm. Third, we combine the first and
the second parts to get a fast way of approximating tensor contractions, which are used as subroutines
in each iteration of the robust tensor power method. We then provide our main theoretical results, and
how to estimate the slice norms needed by our main algorithm.
Importance sampling lemmas. Approximating an inner product is a simple application of importance sampling. The tensor contraction $T(u, v, w)$ can be regarded as the inner product between two
$n^3$-dimensional vectors, and thus importance sampling can be applied. Lemma 1 suggests that we can
take a few samples according to their importance, e.g., we can sample $T_{i,j,k}\, u_i v_j w_k$ with probability
$|u_i v_j w_k|^2/(\|u\|_2^2\|v\|_2^2\|w\|_2^2)$. As long as the number of samples is large enough, it will approximate
the true tensor contraction $\sum_i\sum_j\sum_k T_{i,j,k}\, u_i v_j w_k$ with small variance after a final rescaling.
Lemma 1. Suppose the random variable $X = T_{i,j,k}\, u_i v_j w_k/(p_i q_j r_k)$ with probability $p_i q_j r_k$, where
$p_i = |u_i|^2/\|u\|_2^2$, $q_j = |v_j|^2/\|v\|_2^2$, and $r_k = |w_k|^2/\|w\|_2^2$, and we take $L$ i.i.d. samples of $X$,
denoted $X_1, X_2, \cdots, X_L$. Let $Y = \frac{1}{L}\sum_{\ell=1}^{L} X_\ell$. Then (1) $\mathbb{E}[Y] = \langle T, u \otimes v \otimes w\rangle$, and (2)
$\mathbb{V}[Y] \leq \frac{1}{L}\|T\|_F^2 \cdot \|u \otimes v \otimes w\|_F^2$.
Similarly, we also have importance sampling for each slice $T_{i,\cdot,\cdot}$, i.e., each "face" of $T$.
Lemma 2. For all $i \in [n]$, suppose the random variable $X^i = T_{i,j,k}\, v_j w_k/(q_j r_k)$ with probability
$q_j r_k$, where $q_j = |v_j|^2/\|v\|_2^2$ and $r_k = |w_k|^2/\|w\|_2^2$, and we take $L_i$ i.i.d. samples of $X^i$, say
$X_1^i, X_2^i, \cdots, X_{L_i}^i$. Let $Y^i = \frac{1}{L_i}\sum_{\ell=1}^{L_i} X_\ell^i$. Then (1) $\mathbb{E}[Y^i] = \langle T_{i,\cdot,\cdot}, v \otimes w\rangle$ and (2) $\mathbb{V}[Y^i] \leq \frac{1}{L_i}\|T_{i,\cdot,\cdot}\|_F^2\, \|v \otimes w\|_F^2$.
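To make Lemma 1 concrete, here is a minimal Python sketch of the $\ell_2$-sampling estimator (the function names are ours, and the dense array access is for exposition only; a genuinely sublinear implementation would query entries of $T$ on demand).

```python
import numpy as np

def l2_dist(x):
    """l2-sampling distribution of Lemma 1: p_i = |x_i|^2 / ||x||_2^2."""
    p = x.astype(float) ** 2
    return p / p.sum()

def approx_tuvw(T, u, v, w, num_samples):
    """Unbiased estimate of <T, u (x) v (x) w> from num_samples draws."""
    n = T.shape[0]
    p, q, r = l2_dist(u), l2_dist(v), l2_dist(w)
    i = np.random.choice(n, size=num_samples, p=p)
    j = np.random.choice(n, size=num_samples, p=q)
    k = np.random.choice(n, size=num_samples, p=r)
    # Each term is X_l = T[i,j,k] u_i v_j w_k / (p_i q_j r_k); indices with
    # zero probability are never drawn, so no division by zero occurs.
    x = T[i, j, k] * u[i] * v[j] * w[k] / (p[i] * q[j] * r[k])
    return x.mean()
```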
Generating importance samples in linear time. We need an efficient way to sample indices of a
vector based on their importance. We view this problem as follows: imagine $[0, 1]$ is divided into $z$
"bins" with different lengths corresponding to the probability of selecting each bin, where $z$ is the
number of indices in a probability vector. We generate $m$ random numbers uniformly from $[0, 1]$ and
see which bin each random number belongs to. If a random number is in bin $i$, we sample the $i$-th
index of a vector. There are known algorithms [6, 19] to solve this problem in $O(z + m)$ time.
We give an alternative algorithm GenRandTuples. Our algorithm combines Bentley and Saxe's
algorithm [3] for efficiently generating $m$ sorted random numbers in $O(m)$ time, and Knuth's
shuffling algorithm [12] for generating a random permutation of $[m]$ in $O(m)$ time. We use the
notation CumProb$(v, w)$ and CumProb$(u, v, w)$ for the algorithm creating the distributions on
$\mathbb{R}^{n^2}$ and $\mathbb{R}^{n^3}$ of Lemma 2 and Lemma 1, respectively. We note that naïvely applying previous
algorithms would require $z = O(n^2)$ and $z = O(n^3)$ time to form these two distributions, but we
can take $O(m)$ samples from them implicitly in $O(n + m)$ time.
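A minimal sketch of these two primitives in Python follows (our own illustrative code): $m$ sorted uniforms are produced in $O(m)$ time via normalized prefix sums of exponentials, in the spirit of Bentley and Saxe [3], mapped to bins in a single merge pass, and then shuffled (Knuth [12]) to restore an i.i.d. ordering.

```python
import numpy as np

def sorted_uniforms(m):
    """m sorted Uniform(0,1) variates in O(m) time, without sorting:
    normalized prefix sums of i.i.d. exponentials."""
    s = np.cumsum(np.random.exponential(size=m + 1))
    return s[:-1] / s[-1]

def gen_rand_samples(m, cum_prob):
    """Draw m indices from the distribution whose CDF over bins is
    cum_prob (nondecreasing, last entry 1.0), in O(m + len(cum_prob))
    time: one merge pass of sorted uniforms against the bin boundaries,
    followed by a random shuffle."""
    u = sorted_uniforms(m)
    out = np.empty(m, dtype=np.int64)
    b = 0
    for t in range(m):
        while u[t] > cum_prob[b]:
            b += 1
        out[t] = b
    np.random.shuffle(out)  # Fisher-Yates / Knuth shuffle
    return out
```

For the product distributions of Lemmas 1 and 2, each coordinate can be drawn independently from its own one-dimensional distribution, so tuples are sampled implicitly without ever forming a distribution of size $n^2$ or $n^3$.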
Fast approximate tensor contractions. We propose a fast way to approximately compute the tensor
contractions $T(I, v, w)$ and $T(u, v, w)$ with a sublinear number of samples of $T$, as shown in
Algorithm 1 and Algorithm 2. Naïvely computing tensor contractions using all of the entries of $T$
gives an exact answer but could take $n^3$ time. Also, to keep our algorithm sublinear time, we never
explicitly compute the deflated tensor; rather we represent it implicitly and sample from it.
Algorithm 1 Subroutine for approximate tensor contraction $T(I, v, w)$
1: function ApproxTIvw$(T, v, w, n, B, \{\hat{b}_i\})$
2:   $\tilde{q}, \tilde{r} \leftarrow$ CumProb$(v, w)$
3:   for $d = 1 \to B$ do
4:     $\mathcal{L} \leftarrow$ GenRandTuples$(\sum_{i=1}^{n} \hat{b}_i, \tilde{q}, \tilde{r})$
5:     for $i = 1 \to n$ do
6:       $s_i^{(d)} \leftarrow 0$
7:       for $\ell = 1 \to \hat{b}_i$ do
8:         $(j, k) \leftarrow \mathcal{L}_{(i-1)b+\ell}$
9:         $s_i^{(d)} \leftarrow s_i^{(d)} + \frac{1}{q_j r_k} T_{i,j,k} \cdot v_j \cdot w_k$
10:  $\widehat{T}(I, v, w)_i \leftarrow \mathrm{median}_{d \in [B]}\ s_i^{(d)}/\hat{b}_i$, $\forall i \in [n]$
11:  return $\widehat{T}(I, v, w)$

Algorithm 2 Subroutine for approximate tensor contraction $T(u, v, w)$
1: function ApproxTuvw$(T, u, v, w, n, B, \hat{b})$
2:   $\tilde{p}, \tilde{q}, \tilde{r} \leftarrow$ CumProb$(u, v, w)$
3:   for $d = 1 \to B$ do
4:     $\mathcal{L} \leftarrow$ GenRandTuples$(\hat{b}, \tilde{p}, \tilde{q}, \tilde{r})$
5:     $s^{(d)} \leftarrow 0$
6:     for $(i, j, k) \in \mathcal{L}$ do
7:       $s^{(d)} \leftarrow s^{(d)} + \frac{1}{p_i q_j r_k} T_{i,j,k} \cdot u_i \cdot v_j \cdot w_k$
8:     $s^{(d)} \leftarrow s^{(d)}/\hat{b}$
9:   $\widehat{T}(u, v, w) \leftarrow \mathrm{median}_{d \in [B]}\ s^{(d)}$
10:  return $\widehat{T}(u, v, w)$
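A Python rendering of Algorithm 1's core loop might look as follows (illustrative only; it uses the per-coordinate sampler idea above and the median over $B$ repetitions).

```python
import numpy as np

def approx_TIvw(T, v, w, B, b_slice):
    """Estimate T(I, v, w) coordinate-wise: b_slice[i] l2-samples from
    slice i, repeated B times; the median over the B repetitions plays
    the role of line 10 of Algorithm 1."""
    n = T.shape[0]
    q = v ** 2 / (v ** 2).sum()
    r = w ** 2 / (w ** 2).sum()
    est = np.zeros((B, n))
    for d in range(B):
        for i in range(n):
            m = max(int(b_slice[i]), 1)
            j = np.random.choice(n, size=m, p=q)
            k = np.random.choice(n, size=m, p=r)
            est[d, i] = np.mean(T[i, j, k] * v[j] * w[k] / (q[j] * r[k]))
    return np.median(est, axis=0)
```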
The following theorem gives the error bounds of ApproxTIvw and ApproxTuvw (in Algorithms 1
and 2). Let $\hat{b}_i$ be the number of samples we take from slice $i \in [n]$ in ApproxTIvw, and let $\hat{b}$ denote
the total number of samples in our algorithm.
Theorem 3. For $T \in \mathbb{R}^{n \times n \times n}$ and $u \in \mathbb{R}^n$ with $\|u\|_2 = 1$, define the number $\Delta_{1,T}(u) =
\widehat{T}(u, u, u) - T(u, u, u)$ and the vector $\Delta_{2,T}(u) = \widehat{T}(I, u, u) - T(I, u, u)$. For any $b > 0$, if
$\hat{b}_i \gtrsim b\|T_{i,\cdot,\cdot}\|_F^2/\|T\|_F^2$ then the following bounds hold¹:
$$\mathbb{E}[|\Delta_{1,T}(u)|^2] = O(\|T\|_F^2/b), \quad \text{and} \quad \mathbb{E}[\|\Delta_{2,T}(u)\|_2^2] = O(n\|T\|_F^2/b).$$
In addition, for any fixed $\Phi \in \mathbb{R}^n$ with $\|\Phi\|_2 = 1$,
$$\mathbb{E}[\langle \Phi, \Delta_{2,T}(u)\rangle^2] = O(\|T\|_F^2/b). \quad (1)$$
Eq. (1) can be obtained by observing that each random variable $[\Delta_{2,T}(u)]_i$ is independent, and so
$$\mathbb{V}[\langle \Phi, \Delta_{2,T}(u)\rangle] = \sum_{i=1}^{n} \Phi_i^2\, \frac{\|T_{i,\cdot,\cdot}\|_F^2}{\hat{b}_i} \lesssim \Big(\sum_{i=1}^{n} \Phi_i^2\Big)\frac{\|T\|_F^2}{b} = \frac{\|T\|_F^2}{b}.$$
Remark 4. In [23], the coordinate-wise median of $B$ estimates of $T(I, v, w)$ is used to boost
the success probability. There appears to be a gap [21] in their argument, as it is unclear how to
achieve (1) after taking a coordinate-wise median, which is (7) in Theorem 1 of [23]. To fix this, we
instead pay a factor proportional to the number of iterations in Algorithm 3 in the sample complexity
$\hat{b}$. Since we have expectation bounds on the quantities in Theorem 3, we can apply a Markov bound
and a union bound across all iterations. This suffices for our main theorem concerning sublinear time
below. One can obtain high probability bounds by running Algorithm 3 multiple times independently,
and taking coordinate-wise medians of the output eigenvectors. Empirically, our algorithm works
even if we take the median in each iteration, which is done in line 10 in Algorithm 1.
Replacing Theorem 1 in [23] by our Theorem 3, the rest of the analysis in [23] is unchanged. Our
Algorithm 3 is the same as the sketching-based robust tensor power method in [23], except for lines
10, 12, 15, and 17, where the sketching-based approximate tensor contraction is replaced by our
importance sampling procedures A PPROX T UVW and A PPROX TI VW. Rather than use Theorem 2 of
Wang et al. [23], the main theorem concerning the correctness of the robust tensor decomposition
algorithm, we use a recent improvement of it by Wang and Anandkumar in Theorems 4.1 and 4.2
of [22], which states general guarantees for any algorithm satisfying per iteration noise guarantees.
These theorems also remove many of the earlier eigenvalue assumptions in Theorem 2 of [23].
Theorem 5 (Theorem 4.1 and 4.2 of [22]). Suppose $T = \widetilde{T} + E$, where $\widetilde{T} = \sum_{i=1}^{k} \lambda_i v_i^{\otimes 3}$ with
$\lambda_i > 0$ and orthonormal basis vectors $\{v_1, \ldots, v_k\} \subset \mathbb{R}^n$, $n \geq k$. Let $\lambda_{\max}, \lambda_{\min}$ be the largest and
smallest values in $\{\lambda_i\}_{i=1}^{k}$ and $\{\hat{\lambda}_i, \hat{v}_i\}_{i=1}^{k}$ be outputs of the robust tensor power method. There exist
absolute constants $K_0, C_0, C_1, C_2, C_3 > 0$ such that if $E$ satisfies
$$\|E(I, u_t^{(\tau)}, u_t^{(\tau)})\|_2 \leq \epsilon, \qquad |E(v_i, u_t^{(\tau)}, u_t^{(\tau)})| \leq \min\{\epsilon/\sqrt{k},\ C_0\lambda_{\min}/n\}, \quad (2)$$
¹For two functions $f, g$, we use the shorthand $f \lesssim g$ (resp. $\gtrsim$) to indicate that $f \leq Cg$ (resp. $\geq$) for some
absolute constant $C$.
Algorithm 3 Our main algorithm
1: function ImportanceSamplingRB$(T, n, B, b)$
2:   if $s_i$ are known, where $\|T_{i,\cdot,\cdot}\|_F^2 \lesssim s_i$, then
3:     $\hat{b}_i \leftarrow b \cdot s_i/\|T\|_F^2$, $\forall i \in [n]$
4:   else
5:     $\hat{b}_i \leftarrow b/n$, $\forall i \in [n]$
6:   $\hat{b} = \sum_{i=1}^{n} \hat{b}_i$
7:   for $\ell = 1 \to L$ do
8:     $u^{(\ell)} \leftarrow$ Initialization
9:     for $t = 1 \to T$ do
10:      $u^{(\ell)} \leftarrow$ ApproxTIvw$(T, u^{(\ell)}, u^{(\ell)}, n, B, \{\hat{b}_i\})$
11:      $u^{(\ell)} \leftarrow u^{(\ell)}/\|u^{(\ell)}\|_2$
12:    $\lambda^{(\ell)} \leftarrow$ ApproxTuvw$(T, u^{(\ell)}, u^{(\ell)}, u^{(\ell)}, n, B, \hat{b})$
13:  $\ell^* \leftarrow \arg\max_{\ell \in [L]} \lambda^{(\ell)}$, $u^* \leftarrow u^{(\ell^*)}$
14:  for $t = 1 \to T$ do
15:    $u^* \leftarrow$ ApproxTIvw$(T, u^*, u^*, n, B, \{\hat{b}_i\})$
16:    $u^* \leftarrow u^*/\|u^*\|_2$
17:  $\lambda^* \leftarrow$ ApproxTuvw$(T, u^*, u^*, u^*, n, B, \hat{b})$
18:  return $\lambda^*, u^*$

Figure 1: Running time with growing dimension. (a) Sketching vs. importance sampling: running time (seconds) against tensor dimension $n$ (200 to 1200) for sketching, sampling without pre-scanning, and sampling with pre-scanning. (b) Preprocessing time (seconds) against tensor dimension $n$ for the same three methods.
for all $i \in [k]$, $t \in [T]$, and $\tau \in [L]$, and furthermore
$$\epsilon \leq C_1 \cdot \lambda_{\min}/\sqrt{k}, \qquad T = \Omega(\log(\lambda_{\max} n/\epsilon)), \qquad L \geq \max\{K_0, k\}\log(\max\{K_0, k\}),$$
then with probability at least $9/10$, there exists a permutation $\pi: [k] \to [k]$ such that
$$|\lambda_i - \hat{\lambda}_{\pi(i)}| \leq C_2\epsilon, \qquad \|v_i - \hat{v}_{\pi(i)}\|_2 \leq C_3\epsilon/\lambda_i, \qquad \forall i = 1, \cdots, k.$$
Combining the previous theorem with our importance sampling analysis, we obtain:
Theorem 6 (Main). Assume the notation of Theorem 5. For each $j \in [k]$, suppose we take $\hat{b}^{(j)} =
\sum_{i=1}^{n} \hat{b}_i^{(j)}$ samples during the power iterations for recovering $\hat{\lambda}_j$ and $\hat{v}_j$, where the number of samples
for slice $i$ is $\hat{b}_i^{(j)} \gtrsim b\,\big\|[T - \sum_{l=1}^{j-1} \hat{\lambda}_l \hat{v}_l^{\otimes 3}]_{i,\cdot,\cdot}\big\|_F^2\, \big/\, \big\|T - \sum_{l=1}^{j-1} \hat{\lambda}_l \hat{v}_l^{\otimes 3}\big\|_F^2$, where $b \gtrsim n\|T\|_F^2/\epsilon^2 +
\|T\|_F^2/\min\{\epsilon/\sqrt{k}, \lambda_{\min}/n\}^2$. Then the output guarantees of Theorem 5 hold for Algorithm 3 with
constant probability. Our total time is $O(LTk^2\hat{b})$ and the space is $O(nk)$, where $\hat{b} = \max_{j \in [k]} \hat{b}^{(j)}$.
In Theorem 3, if we require $\hat{b}_i = b\|T_{i,\cdot,\cdot}\|_F^2/\|T\|_F^2$, we need to scan the entire tensor to compute
$\|T_{i,\cdot,\cdot}\|_F^2$, making our algorithm not sublinear. With the following mild assumption in Theorem 7,
our algorithm is sublinear when sampling uniformly ($\hat{b}_i = b/n$) without computing $\|T_{i,\cdot,\cdot}\|_F^2$:
Theorem 7 (Bounded slice norm). There is a constant $\beta > 0$, a constant $\alpha \in (0, 1]$, and a sufficiently
small constant $\gamma > 0$, such that, for any 3rd order tensor $T = \widetilde{T} + E \in \mathbb{R}^{n^3}$ with $\mathrm{rank}(\widetilde{T}) \leq n^{\gamma}$ and
$\lambda_k \geq 1/n^{\gamma}$, if $\|T_{i,\cdot,\cdot}\|_F^2 \leq \frac{1}{n^{\alpha}}\|T\|_F^2$ for all $i \in [n]$, and $E$ satisfies (2), then Algorithm 3 runs in
$O(n^{3-\beta})$ time.
The condition $\alpha \in (0, 1]$ is a practical one. When $\alpha = 1$, all tensor slices have equal Frobenius
norm. The case $\alpha = 0$ only occurs when $\|T_{i,\cdot,\cdot}\|_F = \|T\|_F$; i.e., all except one slice is zero. This
theorem can also be applied to asymmetric tensors, since the analysis in [23] can be extended to them.
For certain cases, we can remove the bounded slice norm assumption. The idea is to take a sublinear
number of samples from the tensor to obtain upper bounds on all slice norms. In the full version,
we extend the algorithm and analysis of the robust tensor power method to $p > 3$ by replacing the
contractions $T(u, v, w)$ and $T(I, v, w)$ with $T(u_1, u_2, \cdots, u_p)$ and $T(I, u_2, \cdots, u_p)$. As outlined
in Section 1, when $p$ is even, because we do not have sign cancellations we can show:
Theorem 8 (Even order). There is a constant $\beta > 0$ and a sufficiently small constant $\gamma > 0$,
such that, for any even order-$p$ tensor $T = \widetilde{T} + E \in \mathbb{R}^{n^p}$ with $\mathrm{rank}(\widetilde{T}) \leq n^{\gamma}$, $p \leq n^{\gamma}$, and
$\lambda_k \geq 1/n^{\gamma}$, the following holds: for any sufficiently large constant $c_0$, there exists a sufficiently small
constant $c > 0$ such that for any $\epsilon \in (0,\ c\lambda_k/(c_0 p^2 k n^{(p-2)/2}))$, if $E$ satisfies $\|E\|_2 \leq \epsilon/(c_0 n)$,
Algorithm 3 runs in $O(n^{p-\beta})$ time.
As outlined in Section 1, for p = 3 and small k we can take sign considerations into account:
Theorem 9 (Low rank). There is a constant $\beta > 0$ and a sufficiently small constant $\gamma > 0$ such that
for any symmetric tensor $T = \widetilde{T} + E \in \mathbb{R}^{n^3}$ with $E$ satisfying (2), $\mathrm{rank}(\widetilde{T}) \leq 2$, and $\lambda_k \geq 1/n^{\gamma}$,
Algorithm 3 runs in $O(n^{3-\beta})$ time.
3 Experiments
3.1 Experiment Setup and Datasets
Our implementation shares the same code base¹ as the sketching-based robust tensor power method
proposed in [23]. We ran our experiments on an i7-5820k CPU with 64 GB of memory in single-threaded mode. We ran two versions of our algorithm: the version with pre-scanning scans the full
tensor to accurately measure per-slice Frobenius norms and makes samples for each slice in proportion
to its Frobenius norm in ApproxTIvw; the version without pre-scanning assumes that the Frobenius
norm of each slice is bounded by $\frac{1}{n^{\alpha}}\|T\|_F^2$, $\alpha \in (0, 1]$, and uses $b/n$ samples per slice, where $b$ is
the total number of samples our algorithm makes, analogous to the sketch length $b$ in [23].
Synthetic datasets. We first generated an orthonormal basis $\{v_i\}_{i=1}^{k}$ and then computed the synthetic
tensor as $\widetilde{T} = \sum_{i=1}^{k} \lambda_i v_i^{\otimes 3}$, with $\lambda_1 \geq \cdots \geq \lambda_k$. Then we normalized $\widetilde{T}$ such that $\|\widetilde{T}\|_F = 1$,
and added a symmetric Gaussian noise tensor $E$ where $E_{ijl} \sim \mathcal{N}(0, \sigma n^{-1.5})$ for $i \leq j \leq l$. Here
$\sigma$ controls the noise-to-signal ratio and we kept it as 0.01 in all our synthetic tensors. For the
eigenvalues $\lambda_i$, we generated three different decays: inverse decay $\lambda_i = \frac{1}{i}$, inverse square decay
$\lambda_i = \frac{1}{i^2}$, and linear decay $\lambda_i = 1 - \frac{i-1}{k}$. We also set $k = 100$ when generating tensors, since higher
rank eigenvalues were almost indistinguishable from the added noise. To show the scalability of our
algorithm, we generated tensors with different dimensions: $n = 200, 400, 600, 800, 1000, 1200$.
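The generation procedure can be sketched in a few lines of Python (our own code; the exact parameterization of the noise variance is an assumption about how $\mathcal{N}(0, \sigma n^{-1.5})$ is meant, and the full symmetrization shown is one simple way to realize a symmetric noise tensor).

```python
import numpy as np

def make_synthetic_tensor(n, k=100, decay='inv_sq', sigma=0.01):
    """Orthonormal basis, eigenvalue decay, unit Frobenius normalization,
    and symmetric Gaussian noise, as described above."""
    V = np.linalg.qr(np.random.randn(n, k))[0]      # orthonormal v_i
    i = np.arange(1, k + 1)
    lam = {'inv': 1.0 / i,
           'inv_sq': 1.0 / i ** 2,
           'linear': 1.0 - (i - 1) / k}[decay]
    T = np.einsum('j,aj,bj,cj->abc', lam, V, V, V)  # sum_j lam_j v_j^(x)3
    T /= np.linalg.norm(T)                          # ||T||_F = 1
    E = np.random.randn(n, n, n) * np.sqrt(sigma * n ** -1.5)
    E = sum(E.transpose(p) for p in
            [(0, 1, 2), (0, 2, 1), (1, 0, 2),
             (1, 2, 0), (2, 0, 1), (2, 1, 0)]) / 6.0  # symmetrize noise
    return T + E
```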
Real-life datasets. Latent Dirichlet Allocation [5] (LDA) is a powerful generative statistical model
for topic modeling. A spectral method has been proposed to solve LDA models [1, 2], and the most
critical step in spectral LDA is to decompose a symmetric $K \times K \times K$ tensor with orthogonal
eigenvectors, where $K$ is the number of modeled topics. We followed the steps in [1, 18] and built
a $K \times K \times K$ tensor $T_{\mathrm{LDA}}$ for each dataset, and then ran our algorithms directly on $T_{\mathrm{LDA}}$ to see
how it works on those tensors in real applications. In our experiments we keep $K = 200$. We used
the same two datasets as the previous work [23], Wiki and Enron, as well as four additional real-life
datasets. We refer the reader to our GitHub repository² for our code and full results.
3.2 Results
We considered running time and the squared residual norm to evaluate the performance of our
algorithms. Given a tensor $T \in \mathbb{R}^{n^3}$, let $\|T - \sum_{i=1}^{k} \lambda_i u_i \otimes v_i \otimes w_i\|_F^2$ denote the squared residual
norm, where $\{(\lambda_1, u_1, v_1, w_1), \cdots, (\lambda_k, u_k, v_k, w_k)\}$ are the eigenvalue/eigenvector tuples obtained by the
robust power method. To reduce the experiment time we looked only for the first eigenvalue and
eigenvector, but our algorithm is capable of finding any number of eigenvalues/eigenvectors. We list
the pre-scanning time as preprocessing time in the tables. It only depends on the tensor dimension $n$ and,
unlike the sketching-based method, it does not depend on $b$. Pre-scanning time is very short, because
it only requires one pass of sequential access to the tensor, which is very efficient on hardware.
Sublinear time verification. Our theoretical result suggests the total number of samples $b_{\text{no-prescan}}$
for our algorithm without pre-scanning is $n^{1-\alpha}$ ($\alpha \in (0, 1]$) times larger than $b_{\text{prescan}}$ for our algorithm
with pre-scanning. But in experiments we observe that when $b_{\text{no-prescan}} = b_{\text{prescan}}$ both algorithms
achieve very similar accuracy, indicating that in practice $\alpha \approx 1$.
Synthetic datasets. We ran our algorithm on a large number of synthetic tensors with different
dimensions and different eigengaps. Table 1 shows results for a tensor with 1200 dimensions with
100 non-zero eigenvalues decaying as $\lambda_i = \frac{1}{i^2}$. To reach roughly the same residual norm, the running
time of our algorithm is over 50 times faster than that of the sketching-based robust tensor power
method, thanks to the fact that we usually need a relatively small $B$ and $b$ to get a good residual, and
the hidden constant factor in the running time of sampling is much smaller than that of sketching.
Our algorithm scales well on large tensors due to its sub-linear nature. In Figure 1(a), for the
sketching-based method we kept $b = 2^{16}$, $B = 30$ for $n \leq 800$ and $B = 50$ for $n > 800$ (larger $n$
requires more sketches to observe a reasonable recovery). For our algorithm, we chose $b$ and $B$ such
¹http://yining-wang.com/fftlda-code.zip
²https://github.com/huanzhang12/sampling_tensor_decomp/
that for each n, our residual norm is on-par or better than the sketching-based method. Our algorithm
needs much less time than the sketching-based one over all dimensions. Another advantage of our
algorithm is that there are zero or very minimal preprocessing steps. In Figure 1(b), we can see how
the preprocessing time grows to prepare sketches when the dimension increases. For applications
where only the first few eigenvectors are needed, the preprocessing time could be a large overhead.
Real-life datasets. Due to the small tensor dimension (200), our algorithm shows less speedup than
on the synthetic tensors, but it is still 2 to 6 times faster than the sketching-based method in each of the
six real-life datasets, achieving the same squared residual norm. Table 2 reports results for one of the datasets in many
different settings of $(b, B)$. Like in the synthetic datasets, we also empirically observe that the constant $b$
in importance sampling is much smaller than the $b$ used in sketching to get the same error guarantee.
Sketching based robust power method: $n = 1200$, $\lambda_i = \frac{1}{i^2}$

b \ B    | Squared residual norm      | Running time (s)        | Preprocessing time (s)
         | 10      30      50        | 10      30      50      | 10      30      50
$2^{10}$ | 1.010   1.014   0.5437    | 0.6114  2.423   4.374   | 5.361   15.85   26.08
$2^{12}$ | 1.020   0.2271  0.1549    | 1.344   4.563   8.022   | 5.978   17.23   28.31
$2^{14}$ | 0.1513  0.1097  0.1003    | 4.928   15.51   27.87   | 8.788   24.72   40.4
$2^{16}$ | 0.1065  0.09242 0.08936   | 22.28   69.7    116.3   | 13.76   34.74   55.34

Importance sampling based robust power method (without pre-scanning): $n = 1200$, $\lambda_i = \frac{1}{i^2}$

b \ B | Squared residual norm        | Running time (s)        | Preprocessing time (s)
      | 10      30      50          | 10      30      50      | 10    30    50
5n    | 0.08684 0.08637 0.08639     | 2.595   8.3     15.46   | 0.0   0.0   0.0
10n   | 0.08784 0.08671 0.08627     | 4.42    13.68   25.84   | 0.0   0.0   0.0
20n   | 0.08704 0.08700 0.08618     | 8.02    24.51   46.37   | 0.0   0.0   0.0
30n   | 0.08697 0.08645 0.08625     | 11.63   35.35   66.71   | 0.0   0.0   0.0
40n   | 0.08653 0.08664 0.08611     | 15.19   46.12   87.24   | 0.0   0.0   0.0

Importance sampling based robust power method (with pre-scanning): $n = 1200$, $\lambda_i = \frac{1}{i^2}$

b \ B | Squared residual norm        | Running time (s)        | Preprocessing time (s)
      | 10      30      50          | 10      30      50      | 10     30     50
5n    | 0.08657 0.08684 0.08636     | 3.1     10.47   18      | 2.234  2.236  2.234
10n   | 0.08741 0.08677 0.08668     | 5.427   17.43   30.26   | 2.232  2.233  2.233
20n   | 0.08648 0.08624 0.08634     | 9.843   31.42   54.49   | 2.226  2.226  2.226
30n   | 0.08635 0.08634 0.08615     | 14.33   45.4    63.85   | 2.226  2.224  2.227
40n   | 0.08622 0.08652 0.08619     | 18.68   59.32   82.83   | 2.225  2.225  2.225

Table 1: Synthetic tensor decomposition using the robust tensor power method. We use an order-3 normalized
dense tensor with dimension $n = 1200$ with $\sigma = 0.01$ noise added. We run sketching-based and sampling-based
methods to find the first eigenvalue and eigenvector by setting $L = 50$, $T = 30$ and varying $B$ and $b$.
Sketching based robust power method: dataset wiki, $\|T\|_F^2 = 2.135\mathrm{e}{+}07$

b \ B    | Squared residual norm   | Running time (s) | Preprocessing time (s)
         | 10         30          | 10      30       | 10       30
$2^{10}$ | 2.091e+07  1.951e+07   | 0.2346  0.8749   | 0.1727   0.2535
$2^{11}$ | 1.971e+07  1.938e+07   | 0.4354  1.439    | 0.2408   0.3167
$2^{12}$ | 1.947e+07  1.930e+07   | 1.035   2.912    | 0.4226   0.4275
$2^{13}$ | 1.931e+07  1.927e+07   | 2.04    5.94     | 0.5783   0.6493
$2^{14}$ | 1.928e+07  1.926e+07   | 4.577   13.93    | 1.045    1.121

Importance sampling based robust power method (without pre-scanning): dataset wiki, $\|T\|_F^2 = 2.135\mathrm{e}{+}07$

b \ B | Squared residual norm   | Running time (s) | Preprocessing time (s)
      | 10         30          | 10      30       | 10    30
5n    | 1.931e+07  1.928e+07   | 0.3698  1.146    | 0.0   0.0
10n   | 1.931e+07  1.929e+07   | 0.5623  1.623    | 0.0   0.0
20n   | 1.935e+07  1.926e+07   | 0.9767  2.729    | 0.0   0.0
30n   | 1.929e+07  1.926e+07   | 1.286   3.699    | 0.0   0.0
40n   | 1.928e+07  1.925e+07   | 1.692   4.552    | 0.0   0.0

Importance sampling based robust power method (with pre-scanning): dataset wiki, $\|T\|_F^2 = 2.135\mathrm{e}{+}07$

b \ B | Squared residual norm   | Running time (s) | Preprocessing time (s)
      | 10         30          | 10      30       | 10        30
5n    | 1.931e+07  1.930e+07   | 0.4376  1.168    | 0.01038   0.01103
10n   | 1.928e+07  1.930e+07   | 0.6357  1.8      | 0.0104    0.01044
20n   | 1.931e+07  1.927e+07   | 1.083   2.962    | 0.01102   0.01042
30n   | 1.929e+07  1.925e+07   | 1.457   4.049    | 0.01102   0.01043
40n   | 1.929e+07  1.925e+07   | 1.905   5.246    | 0.01105   0.01105

Table 2: Tensor decomposition in LDA on the wiki dataset. The tensor is generated by spectral LDA with
dimension $200 \times 200 \times 200$. It is symmetric but not normalized. We fix $L = 50$, $T = 30$ and vary $B$ and $b$.
References
[1] A. Anandkumar, R. Ge, D. Hsu, S. M. Kakade, and M. Telgarsky. Tensor decompositions for
learning latent variable models. JMLR, 15(1):2773–2832, 2014.
[2] A. Anandkumar, Y.-k. Liu, D. J. Hsu, D. P. Foster, and S. M. Kakade. A spectral algorithm for
latent dirichlet allocation. In NIPS, pages 917–925, 2012.
[3] J. L. Bentley and J. B. Saxe. Generating sorted lists of random numbers. ACM Transactions on
Mathematical Software (TOMS), 6(3):359–364, 1980.
[4] S. Bhojanapalli and S. Sanghavi. A new sampling technique for tensors. CoRR, abs/1502.05023,
2015.
[5] D. M. Blei, A. Y. Ng, and M. I. Jordan. Latent dirichlet allocation. JMLR, 3:993–1022, 2003.
[6] K. Bringmann and K. Panagiotou. Efficient sampling methods for discrete distributions. In
International Colloquium on Automata, Languages, and Programming, pages 133–144. Springer,
2012.
[7] J. H. Choi and S. Vishwanathan. Dfacto: Distributed factorization of tensors. In NIPS, pages
1296–1304, 2014.
[8] K. L. Clarkson, E. Hazan, and D. P. Woodruff. Sublinear optimization for machine learning. J.
ACM, 59(5):23, 2012.
[9] R. A. Harshman. Foundations of the PARAFAC procedure: Models and conditions for an explanatory multi-modal factor analysis. UCLA Working Papers in Phonetics, 16:1–84, 1970.
[10] F. Huang, U. N. Niranjan, M. U. Hakeem, P. Verma, and A. Anandkumar. Fast detection of overlapping
communities via online tensor methods on gpus. CoRR, abs/1309.0787, 2013.
[11] U. Kang, E. E. Papalexakis, A. Harpale, and C. Faloutsos. Gigatensor: scaling tensor analysis
up by 100 times - algorithms and discoveries. In KDD, pages 316–324, 2012.
[12] D. E. Knuth. The Art of Computer Programming, vol. 2: Seminumerical Algorithms. Addison-Wesley, Reading, MA, pages 229–279, 1969.
[13] A. Moitra. Tensor decompositions and their applications, 2014.
[14] M. Monemizadeh and D. P. Woodruff. 1-pass relative-error lp-sampling with applications. In
SODA, pages 1143–1160, 2010.
[15] N. Pham and R. Pagh. Fast and scalable polynomial kernels via explicit feature maps. In KDD,
pages 239–247, 2013.
[16] A. H. Phan, P. Tichavský, and A. Cichocki. Fast alternating LS algorithms for high order
CANDECOMP/PARAFAC tensor factorizations. IEEE Transactions on Signal Processing,
61(19):4834–4846, 2013.
[17] C. E. Tsourakakis. MACH: fast randomized tensor decompositions. In SDM, pages 689–700,
2010.
[18] H.-Y. F. Tung, C.-Y. Wu, M. Zaheer, and A. J. Smola. Spectral methods for the hierarchical
dirichlet process. 2015.
[19] A. J. Walker. An efficient method for generating discrete random variables with general
distributions. ACM Transactions on Mathematical Software (TOMS), 3(3):253–256, 1977.
[20] C. Wang, X. Liu, Y. Song, and J. Han. Scalable moment-based inference for latent dirichlet
allocation. In ECML-PKDD, pages 290–305, 2014.
[21] Y. Wang. Personal communication, 2016.
[22] Y. Wang and A. Anandkumar. Online and differentially-private tensor decomposition. CoRR,
abs/1606.06237, 2016.
[23] Y. Wang, H.-Y. Tung, A. J. Smola, and A. Anandkumar. Fast and guaranteed tensor decomposition via sketching. In NIPS, pages 991–999, 2015.
Neural Universal Discrete Denoiser
Taesup Moon
DGIST
Daegu, Korea 42988
tsmoon@dgist.ac.kr
Seonwoo Min, Byunghan Lee, Sungroh Yoon
Seoul National University
Seoul, Korea 08826
{mswzeus, styxkr, sryoon}@snu.ac.kr
Abstract
We present a new framework of applying deep neural networks (DNN) to devise a
universal discrete denoiser. Unlike other approaches that utilize supervised learning
for denoising, we do not require any additional training data. In such setting, while
the ground-truth label, i.e., the clean data, is not available, we devise ?pseudolabels? and a novel objective function such that DNN can be trained in a same way
as supervised learning to become a discrete denoiser. We experimentally show that
our resulting algorithm, dubbed as Neural DUDE, significantly outperforms the
previous state-of-the-art in several applications with a systematic rule of choosing
the hyperparameter, which is an attractive feature in practice.
1 Introduction
Cleaning noise-corrupted data, i.e., denoising, is a ubiquitous problem in signal processing and
machine learning. Discrete denoising, in particular, focuses on the cases in which both the underlying
clean and noisy data take their values in some finite set. Such setting covers several applications
in different domains, such as image denoising [1, 2], DNA sequence denoising [3], and channel
decoding [4].
A conventional approach for addressing the denoising problem is the Bayesian approach, which can
often yield a computationally efficient algorithm with reasonable performance. However, limitations
can arise when the assumed stochastic models do not accurately reflect the real data distribution.
Particularly, while the models for the noise can often be obtained relatively reliably, obtaining the
accurate model for the original clean data is more tricky; the model for the clean data may be wrong,
changing, or may not exist at all.
In order to alleviate the above mentioned limitations, [5] proposed a universal approach for discrete
denoising. Namely, they first considered a general setting that the clean finite-valued source symbols
are corrupted by a discrete memoryless channel (DMC), a noise mechanism that corrupts each source
symbol independently and statistically identically. Then, they devised an algorithm called DUDE
(Discrete Universal DEnoiser) and showed rigorous performance guarantees for the semi-stochastic
setting; namely, that where no stochastic modeling assumptions are made on the underlying source
data, while the corruption mechanism is assumed to be governed by a known DMC. DUDE is shown
to universally attain the optimum denoising performance for any source data as the data size grows.
In addition to the strong theoretical performance guarantee, DUDE can be implemented as a computationally efficient sliding window denoiser; hence, it has been successfully applied and extended
to some practical applications, e.g., [1, 3, 4, 2]. However, it also has limitations; namely, the performance is sensitive to the choice of the sliding window size $k$, which has to be hand-tuned without
any systematic rule. Moreover, when k becomes large and the alphabet size of the signal increases,
DUDE suffers from the data sparsity problem, which significantly deteriorates the performance.
In this paper, we present a novel framework of addressing above limitations of DUDE by adopting
the machineries of deep neural networks (DNN) [6], which recently have seen great empirical success
30th Conference on Neural Information Processing Systems (NIPS 2016), Barcelona, Spain.
in many practical applications. While there have been some previous attempts of applying neural
networks to grayscale image denoising [7, 8], they all remained in the supervised learning setting, i.e.,
large-scale training data that consists of clean and noisy image pairs was necessary. Such approach
requires significant computation resources and training time and is not always transferable to other
denoising applications, in which collecting massive training data is often expensive, e.g., DNA
sequence denoising [9].
Henceforth, we stick to the setting of DUDE, which requires no additional data other than the given
noisy data. In this case, however, it is not straightforward to adopt DNN since there is no ground-truth
label for supervised training of the networks. Namely, the target label that a denoising algorithm
is trying to estimate from the observation is the underlying clean signal, hence, it can never be
observed to the algorithm. Therefore, we carefully exploit the known DMC assumption and the
finiteness of the data values, and devise "pseudo-labels" for training DNN. They are based on the
unbiased estimate of the true loss a denoising algorithm is incurring, and we show that it is possible
to train a DNN as a universal discrete denoiser using the devised pseudo-labels and generalized
cross-entropy objective function. As a by-product, we also obtain an accurate estimator of the true
denoising performance, with which we can systematically choose the appropriate window size k. In
results, we experimentally verify that our DNN based denoiser, dubbed as Neural DUDE, can achieve
significantly better performance than DUDE maintaining robustness with respect to k. Furthermore,
we note that although the work in this paper is focused on discrete denoising, we believe the proposed
framework can be extended to the denoising of continuous-valued signal as well, and we defer it to
the future work.
2 Notations and related work
2.1 Problem setting of discrete denoising
Throughout this paper, we will generally denote a sequence ($n$-tuple) as, e.g., $a^n = (a_1, \ldots, a_n)$,
and $a_i^j$ refers to the subsequence $(a_i, \ldots, a_j)$. In the discrete denoising problem, we denote the clean,
underlying source data as $x^n$ and assume each component $x_i$ takes a value in some finite set $\mathcal{X}$. The
source sequence is corrupted by a DMC and results in a noisy version of the source $z^n$, of which
each component $z_i$ takes a value in, again, some finite set $\mathcal{Z}$. The DMC is completely characterized
by the channel transition matrix $\Pi \in \mathbb{R}^{|\mathcal{X}| \times |\mathcal{Z}|}$, of which the $(x, z)$-th element, $\Pi(x, z)$, stands for
$\Pr(Z_i = z \,|\, X_i = x)$, i.e., the conditional probability of the noisy symbol taking value $z$ given the
original source symbol was $x$. An essential but natural assumption we make is that $\Pi$ is of full
row rank.
Upon observing the entire noisy data $z^n$, a discrete denoiser reconstructs the original data with
$\hat{X}^n = (\hat{X}_1(z^n), \ldots, \hat{X}_n(z^n))$, where each reconstructed symbol $\hat{X}_i(z^n)$ also takes its value in a
finite set $\hat{\mathcal{X}}$. The goodness of the reconstruction by a discrete denoiser $\hat{X}^n$ is measured by the average
loss, $L_{\hat{X}^n}(x^n, z^n) = \frac{1}{n}\sum_{i=1}^{n} \Lambda(x_i, \hat{X}_i(z^n))$, where $\Lambda(x_i, \hat{x}_i)$ is a single-letter loss function that
measures the loss incurred by estimating $x_i$ with $\hat{x}_i$ at location $i$. The loss function can also be
represented with a loss matrix $\Lambda \in \mathbb{R}^{|\mathcal{X}| \times |\hat{\mathcal{X}}|}$. Throughout the paper, for simplicity, we will assume
$\mathcal{X} = \mathcal{Z} = \hat{\mathcal{X}}$, thus, assume that $\Pi$ is invertible.
2.2 Discrete Universal DEnoiser (DUDE)
DUDE in [5] is a two-pass algorithm that has a linear complexity in the data size $n$. During the first
pass, the algorithm with the window size $k$ collects the statistics vector
$$\mathbf{m}[z^n, l^k, r^k](a) = \big|\{i : k + 1 \leq i \leq n - k,\ z_{i-k}^{i+k} = l^k a r^k\}\big|, \quad (1)$$
for all $a \in \mathcal{Z}$, which is the count of the occurrences of the symbol $a \in \mathcal{Z}$ along the noisy sequence $z^n$
that has the double-sided context $(l^k, r^k) \in \mathcal{Z}^{2k}$. Once the $\mathbf{m}$ vector is collected, for the second pass,
DUDE applies the rule
$$\hat{X}_{i,\text{DUDE}}(z^n) = \arg\min_{\hat{x} \in \hat{\mathcal{X}}} \mathbf{m}[z^n, c_i]^\top \Pi^{-1}[\lambda_{\hat{x}} \odot \pi_{z_i}] \quad \text{for each } k + 1 \leq i \leq n - k, \quad (2)$$
where $c_i \triangleq (z_{i-k}^{i-1}, z_{i+1}^{i+k})$ is the context of $z_i$, $\pi_{z_i}$ is the $z_i$-th column of the channel matrix $\Pi$, $\lambda_{\hat{x}}$ is
the $\hat{x}$-th column of the loss matrix $\Lambda$, and $\odot$ stands for the element-wise product. The form of (2)
shows that DUDE is a sliding window denoiser with window size $2k + 1$; namely, DUDE returns the
same denoised symbol at all locations $i$ with the same value of $z_{i-k}^{i+k}$. We will call such denoisers
the $k$-th order sliding window denoisers from now on.
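For concreteness, a compact Python sketch of the two-pass DUDE rule (2) is given below (our own illustrative code for a square channel over the alphabet $\{0, \ldots, A-1\}$; it is not the authors' released implementation).

```python
import numpy as np
from collections import defaultdict

def dude(z, k, Pi, Lam):
    """Two-pass DUDE: count the statistics vector m of (1), then apply
    rule (2) at every location with a full double-sided context."""
    z = np.asarray(z)
    n, A = len(z), Pi.shape[0]
    Pi_inv = np.linalg.inv(Pi)
    # First pass: m[context][a] = count of symbol a inside that context.
    m = defaultdict(lambda: np.zeros(A))
    ctx = lambda i: (tuple(z[i - k:i]), tuple(z[i + 1:i + k + 1]))
    for i in range(k, n - k):
        m[ctx(i)][z[i]] += 1
    # Second pass: x_hat_i = argmin_x m^T Pi^{-1} (Lam[:, x] * Pi[:, z_i]).
    x_hat = z.copy()
    for i in range(k, n - k):
        scores = [m[ctx(i)] @ Pi_inv @ (Lam[:, x] * Pi[:, z[i]])
                  for x in range(A)]
        x_hat[i] = int(np.argmin(scores))
    return x_hat
```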
DUDE is shown to be universal, i.e., for any underlying clean sequence $x^n$, it can always attain the
performance of the best $k$-th order sliding window denoiser as long as $k|\mathcal{Z}|^{2k} = o(n/\log n)$ holds
[5, Theorem 2]. For more rigorous analyses, we refer to the original paper [5].
2.3 Deep neural networks (DNN) and related work
Deep neural networks (DNN), often dubbed as deep learning algorithms, have recently made significant impacts in several practical applications, such as speech recognition, image recognition, and
machine translation, etc. For a thorough review on recent progress of DNN, we refer the readers to
[6] and references therein.
Regarding denoising, [7, 8, 10] have successfully applied the DNN to grayscale image denoising by
utilizing supervised learning at the small image patch level. Namely, they generated clean and noisy
image patches and trained neural networks to learn a mapping from noisy to clean patches. While
such approach attained the state-of-the-art performance, as mentioned in Introduction, it has several
limitations. That is, it typically requires massive amount of training data, and multiple copies of the
data need to be generated for different noise types and levels to achieve robust performance. Such
requirement of large training data cannot be always met in other applications, e.g., in DNA sequence
denoising, collecting large scale clean DNA sequences is much more expensive than obtaining
training images on the web. Moreover, for image denoising, working in the small patch level makes
sense since the image patches may share some textural regularities, but in other applications, the
characteristics of the given data for denoising could differ from those in the pre-collected training set.
For instance, the characteristics of substrings of DNA sequences vary much across different species
and genes, hence, the universal setting makes more sense in DNA sequence denoising.
3 An alternative interpretation of DUDE
3.1 Unbiased estimated loss
In order to make an alternative interpretation of DUDE, which can be also found in [11], we need the
tool developed in [12]. To be self-contained, we recap the idea here. Consider a single letter case,
namely, a clean symbol $x$ is corrupted by $\Pi$ and results in the noisy observation¹ $Z$. Then, suppose
a single-symbol denoiser $s: \mathcal{Z} \to \hat{\mathcal{X}}$ is applied and obtains the denoised symbol $\hat{X} = s(Z)$. In this
case, the true loss incurred by $s$ for the clean symbol $x$ and the noisy observation $Z$ is $\Lambda(x, s(Z))$. It
is clear that $s$ cannot evaluate its loss since it does not know what $x$ is, but the following shows that an
unbiased estimate of the expected true loss, which is only based on $Z$ and $s$, can be derived.
First, denote $\mathcal{S}$ as the set of all possible single-symbol denoisers. Note $|\mathcal{S}| = |\hat{\mathcal{X}}|^{|\mathcal{Z}|}$. Then, we define
a matrix $\rho \in \mathbb{R}^{|\mathcal{X}| \times |\mathcal{S}|}$ with
$$\rho(x, s) = \sum_{z \in \mathcal{Z}} \Pi(x, z)\Lambda(x, s(z)) = \mathbb{E}_x \Lambda(x, s(Z)), \qquad x \in \mathcal{X},\ s \in \mathcal{S}. \quad (3)$$
Then, we can define an estimated loss matrix² $L \triangleq \Pi^{-1}\rho \in \mathbb{R}^{|\mathcal{Z}| \times |\mathcal{S}|}$. With this definition, we can
show that $L(Z, s)$ is an unbiased estimate of $\mathbb{E}_x \Lambda(x, s(Z))$ as follows (as shown in [12]):
$$\mathbb{E}_x L(Z, s) = \sum_{z} \Pi(x, z)\sum_{x'} \Pi^{-1}(z, x')\rho(x', s) = \sum_{x'} \mathbb{1}\{x' = x\}\,\rho(x', s) = \rho(x, s) = \mathbb{E}_x \Lambda(x, s(Z)).$$
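For a small alphabet, $\rho$ and $L$ can be computed directly; the following Python sketch (our own code) enumerates all $|\hat{\mathcal{X}}|^{|\mathcal{Z}|}$ single-symbol denoisers.

```python
import numpy as np
from itertools import product

def estimated_loss_matrix(Pi, Lam):
    """Build rho of (3) and L = Pi^{-1} rho for a square channel Pi and
    loss matrix Lam; a denoiser s is a tuple with s[z] = reconstruction."""
    A = Pi.shape[0]                                  # |X| = |Z| = |X_hat|
    denoisers = list(product(range(A), repeat=A))    # all maps Z -> X_hat
    rho = np.array([[sum(Pi[x, z] * Lam[x, s[z]] for z in range(A))
                     for s in denoisers] for x in range(A)])
    return np.linalg.inv(Pi) @ rho, denoisers        # L is |Z| x |S|
```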
3.2 DUDE: Minimizing the sum of estimated losses
As mentioned in Section 2.2, DUDE with context size $k$ is the $k$-th order sliding window denoiser.
Generally, we can denote such a $k$-th order sliding window denoiser as $s_k: \mathcal{Z}^{2k+1} \to \hat{\mathcal{X}}$, which
¹We use the uppercase letter $Z$ to stress that it is a random variable.
²For the general case in which $\Pi$ is not a square matrix, $\Pi^{-1}$ can be replaced with the right inverse of $\Pi$.
obtains the reconstruction at the $i$-th location as
$$\hat{X}_i(z^n) = s_k(z_{i-k}^{i+k}) = s_k(c_i, z_i). \quad (4)$$
To recall, $c_i = (z_{i-k}^{i-1}, z_{i+1}^{i+k})$. Now, from the formulation (4), we can interpret that $s_k$ defines a
single-symbol denoiser at location $i$, i.e., $s_k(c_i, \cdot)$, depending on $c_i$. With this view on $s_k$, as derived
in [11], we can show that the DUDE defined in (2) is equivalent to finding a single-symbol denoiser
$$s_{k,\text{DUDE}}(c, \cdot) = \arg\min_{s \in \mathcal{S}} \sum_{\{i : c_i = c\}} L(z_i, s), \quad (5)$$
for each context $c \in \mathcal{C}_k \triangleq \{(l^k, r^k) : (l^k, r^k) \in \mathcal{Z}^{2k}\}$, and obtaining the reconstruction at location $i$
as $\hat{X}_{i,\text{DUDE}}(z^n) = s_{k,\text{DUDE}}(c_i, z_i)$. The interpretation (5) gives some intuition on why DUDE enjoys
strong theoretical guarantees in [5]; since $L(Z_i, s)$ is an unbiased estimate of $\mathbb{E}_{x_i}\Lambda(x_i, s(Z_i))$,
$\sum_{i \in \{i : c_i = c\}} L(Z_i, s)$ will concentrate on $\sum_{i \in \{i : c_i = c\}} \Lambda(x_i, s(Z_i))$ as long as $|\{i : c_i = c\}|$ is
sufficiently large. Hence, the single-symbol denoiser that minimizes the sum of the estimated losses
for each $c$ (i.e., (5)) will also make the sum of the true losses small, which is the goal of a denoiser.
We can also express (5) using vector notations, which will become useful for deriving the Neural
DUDE in the next section. That is, we let $\Delta^{|\mathcal{S}|}$ be the probability simplex in $\mathbb{R}^{|\mathcal{S}|}$. (Suppose we have
uniquely assigned each coordinate of $\mathbb{R}^{|\mathcal{S}|}$ to each single-symbol denoiser in $\mathcal{S}$ from now on.) Then,
we can define a probability vector for each $c$,
$$\mathbf{p}^*(c) \triangleq \arg\min_{\mathbf{p} \in \Delta^{|\mathcal{S}|}} \sum_{\{i : c_i = c\}} \mathbf{1}_{z_i}^\top L\, \mathbf{p}, \quad (6)$$
which will be on the vertex of $\Delta^{|\mathcal{S}|}$ that corresponds to $s_{k,\text{DUDE}}(c, \cdot)$ in (5). The reason is that
the objective function in (6) is a linear function in $\mathbf{p}$. Hence, we can simply obtain $s_{k,\text{DUDE}}(c, \cdot) =
\arg\max_s \mathbf{p}^*(c)_s$, where $\mathbf{p}^*(c)_s$ stands for the $s$-th coordinate of $\mathbf{p}^*(c)$.
4 Neural DUDE: A DNN based discrete denoiser
As seen in the previous section, DUDE utilizes the estimated loss matrix $L$, which does not depend
on the clean sequence $x^n$. However, the main drawback of DUDE is that, as can be seen in (5), it
treats each context $c$ independently from the others. Namely, when the context size $k$ grows, the
number of different contexts $|\mathcal{C}_k| = |\mathcal{Z}|^{2k}$ will grow exponentially with $k$; hence, the sample size
for each context, $|\{i : c_i = c\}|$, will decrease exponentially for a given sequence length $n$. Such
phenomenon will hinder the concentration of $\sum_{i \in \{i: c_i = c\}} L(Z_i, s)$ mentioned in the previous section,
which causes the performance of DUDE to deteriorate when $k$ grows too large.
In order to resolve the above problem, we develop Neural DUDE, which adopts a single neural network
such that the information from similar contexts can be shared via the network parameters. We note that
our usage of DNN resembles that of the neural language model (NLM) [13], which improved upon
the conventional $N$-gram models. The difference is that NLM is essentially a prediction problem,
hence the ground truth label for supervised training is easily available, but in denoising, this is not the
case. Before describing the algorithm in more detail, we need the following lemma.
4.1 A lemma
Let $\mathbb{R}_+^{|\mathcal{S}|}$ be the space of all $|\mathcal{S}|$-dimensional vectors of which the elements are nonnegative. Then, for any
$\mathbf{g} \in \mathbb{R}_+^{|\mathcal{S}|}$ and any $\mathbf{p} \in \Delta^{|\mathcal{S}|}$, define a cost function $\mathcal{C}(\mathbf{g}, \mathbf{p}) \triangleq -\sum_{i=1}^{|\mathcal{S}|} g_i \log p_i$, i.e., a generalized
cross-entropy function with the first argument not normalized to a probability vector. Note $\mathcal{C}(\mathbf{g}, \mathbf{p})$ is
linear in $\mathbf{g}$ and convex in $\mathbf{p}$. Now, the following lemma shows another way of obtaining DUDE.
Lemma 1. Define $L_{\text{new}} \triangleq -L + L_{\max}\mathbf{1}\mathbf{1}^\top$, in which $L_{\max} \triangleq \max_{z,s} L(z, s)$ is the maximum element
of $L$. Using the cost function $\mathcal{C}(\cdot, \cdot)$ defined above, for each $c \in \mathcal{C}_k$, let us define
$$\mathbf{p}^*(c) \triangleq \arg\min_{\mathbf{p} \in \Delta^{|\mathcal{S}|}} \sum_{\{i : c_i = c\}} \mathcal{C}\big(L_{\text{new}}^\top \mathbf{1}_{z_i}, \mathbf{p}\big).$$
Then, we have $s_{k,\text{DUDE}}(c, \cdot) = \arg\max_s \mathbf{p}^*(c)_s$.
Proof: The proof of the lemma is given in the Supplementary Material.
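In code, $L_{\text{new}}$ and the pseudo-labels used below are one line each (an illustrative sketch, continuing the earlier snippet for $L$).

```python
import numpy as np

def pseudo_labels(L, z):
    """Lemma 1: L_new = -L + L_max 1 1^T, so every entry is nonnegative.
    Row z_i of L_new is exactly the pseudo-label L_new^T 1_{z_i}."""
    L_new = -L + L.max()      # broadcasting adds L_max to every entry
    return L_new[np.asarray(z), :]
```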
4.2 Neural DUDE
The main idea for Neural DUDE is to use a single neural network to learn the $k$-th order sliding
window denoising rule for all $c$'s. Namely, we define $\mathbf{p}(\mathbf{w}, \cdot): \mathcal{Z}^{2k} \to \Delta^{|\mathcal{S}|}$ as a feed-forward
neural network that takes the context vector $c \in \mathcal{C}_k$ as input and outputs a probability vector on
$\Delta^{|\mathcal{S}|}$. We let $\mathbf{w}$ stand for all the parameters in the network. The network architecture of $\mathbf{p}(\mathbf{w}, \cdot)$ has
the softmax output layer, and it is analogous to that used for multi-class classification. Thus,
when the parameters are properly learned, we expect that $\mathbf{p}(\mathbf{w}, c_i)$ will give predictions on which
single-symbol denoiser to apply at location $i$ with the context $c_i$.
4.2.1 Learning
When not resorting to the supervised learning framework, learning the network parameters $\mathbf{w}$ is not
straightforward, as mentioned in the Introduction. However, inspired by Lemma 1, we define the
objective function to minimize for learning $\mathbf{w}$ as
$$\mathcal{L}(\mathbf{w}, z^n) \triangleq \frac{1}{n}\sum_{i=1}^{n} \mathcal{C}\big(L_{\text{new}}^\top \mathbf{1}_{z_i},\ \mathbf{p}(\mathbf{w}, c_i)\big), \quad (7)$$
which resembles the widely used cross-entropy objective function in supervised multi-class classification. Namely, in (7), $\{(c_i, L_{\text{new}}^\top \mathbf{1}_{z_i})\}_{i=1}^{n}$, which solely depends on the noisy sequence $z^n$, can be
analogously thought of as the input-label pairs in supervised learning. (Note that for $i \leq k$ and $i \geq n - k$,
dummy variables are padded for obtaining $c_i$.) But, unlike classification, in which the ground-truth
label is given as a one-hot vector, we treat $L_{\text{new}}^\top \mathbf{1}_{z_i} \in \mathbb{R}_+^{|\mathcal{S}|}$ as a target "pseudo-label" on $\mathcal{S}$.
Once the objective function is set as in (7), we can then use the widely used optimization techniques,
namely, back-propagation and Stochastic Gradient Descent (SGD)-based methods, for learning
the parameters $\mathbf{w}$. In fact, most of the well-known improvements to the SGD method, such as
momentum [14], mini-batch SGD, and several others [15, 16], can all be used for learning $\mathbf{w}$. Note
that there is no notion of generalization in our setting, since the goal of denoising is to simply achieve
as small an average loss as possible for the given noisy sequence $z^n$, rather than performing well on
separate unseen test data. Hence, we do not use any regularization techniques such as dropout in our
learning, but simply try to minimize the objective function.
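A minimal training sketch in the spirit of this setup is shown below; the paper used Keras 1.x on Theano, so the tf.keras API details here are assumptions, and the helper function is ours.

```python
import numpy as np
import tensorflow as tf
from tensorflow import keras

def train_neural_dude(contexts, labels, hidden=40, depth=4,
                      epochs=10, batch_size=100):
    """Minimize (7) over a feed-forward network (assumes depth >= 2).
    `contexts` holds one-hot encoded double-sided contexts (dim 2k|Z|);
    `labels` holds the pseudo-labels L_new^T 1_{z_i} (dim |S|)."""
    input_dim, num_denoisers = contexts.shape[1], labels.shape[1]
    model = keras.Sequential()
    model.add(keras.layers.Dense(hidden, activation='relu',
                                 input_shape=(input_dim,)))
    for _ in range(depth - 2):
        model.add(keras.layers.Dense(hidden, activation='relu'))
    model.add(keras.layers.Dense(num_denoisers, activation='softmax'))

    def gen_cross_entropy(g, p):          # C(g, p) = -sum_s g_s log p_s
        return -tf.reduce_sum(g * tf.math.log(p + 1e-12), axis=-1)

    model.compile(optimizer='adam', loss=gen_cross_entropy)
    model.fit(contexts, labels, batch_size=batch_size, epochs=epochs)
    return model
```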
4.2.2 Denoising
After sufficient iterations of weight updates, the objective function (7) will converge, and we will
denote the converged parameters as $\mathbf{w}^*$. The Neural DUDE algorithm then applies the resulting
network $\mathbf{p}(\mathbf{w}^*, \cdot)$ to the exact same noisy sequence $z^n$ used for learning to denoise. Namely, for each
$c \in \mathcal{C}_k$, we obtain a single-symbol denoiser
$$s_{k,\text{Neural DUDE}}(c, \cdot) = \arg\max_s \mathbf{p}(\mathbf{w}^*, c)_s \quad (8)$$
and the reconstruction at location $i$ by $\hat{X}_{i,\text{Neural DUDE}}(z^n) = s_{k,\text{Neural DUDE}}(c_i, z_i)$.
From the objective function (7) and the definition (8), it is apparent that Neural DUDE does share
information across different contexts, since $\mathbf{w}^*$ is learnt from all data and shared across all contexts.
Such property enables Neural DUDE to robustly run with much larger $k$'s than DUDE without
running into the data sparsity problem. As shown in the experimental section, Neural DUDE with
large $k$ can significantly improve the denoising performance compared to DUDE. Furthermore, in the
experimental section, we show that the concentration
$$\frac{1}{n}\sum_{i=1}^{n} L\big(Z_i, s_{k,\text{Neural DUDE}}(c_i, \cdot)\big) \approx \frac{1}{n}\sum_{i=1}^{n} \Lambda\big(x_i, s_{k,\text{Neural DUDE}}(c_i, Z_i)\big) \quad (9)$$
holds with high probability even for very large $k$'s, whereas such concentration quickly breaks for
DUDE as $k$ grows. While deferring the analyses on why such concentration always holds to the future
work, we can use the property to provide a systematic mechanism for choosing the best context size
$k$ for Neural DUDE: simply choose $k^* = \arg\min_k \frac{1}{n}\sum_{i=1}^{n} L(Z_i, s_{k,\text{Neural DUDE}}(c_i, \cdot))$. As shown
in the experiments, such choice of $k$ for Neural DUDE gives an excellent denoising performance.
Algorithm 1 summarizes the Neural DUDE algorithm.
Algorithm 1 Neural DUDE algorithm
Input: Noisy sequence $z^n$, $\Pi$, $\Lambda$, maximum context size $k_{\max}$
Output: Denoised sequence $\hat{X}^n_{\text{Neural DUDE}} = \{\hat{X}_{i,\text{Neural DUDE}}(z^n)\}_{i=1}^{n}$
Compute $L = \Pi^{-1}\rho$ as in Section 3.1 and $L_{\text{new}}$ as in Lemma 1
for $k = 1, \ldots, k_{\max}$ do
  Initialize $\mathbf{p}(\mathbf{w}, \cdot)$ with input dimension $2k|\mathcal{Z}|$ (using one-hot encoding of each noisy symbol)
  Obtain $\mathbf{w}_k^*$ minimizing $\mathcal{L}(\mathbf{w}, z^n)$ in (7) using an SGD-like optimization method
  Obtain $s_{k,\text{Neural DUDE}}(c, \cdot)$ for all $c \in \mathcal{C}_k$ as in (8) using $\mathbf{w}_k^*$
  Compute $\mathcal{L}_k \triangleq \frac{1}{n}\sum_{i=1}^{n} L(z_i, s_{k,\text{Neural DUDE}}(c_i, \cdot))$
end for
Get $k^* = \arg\min_k \mathcal{L}_k$ and obtain $\hat{X}_{i,\text{Neural DUDE}}(z^n) = s_{k^*,\text{Neural DUDE}}(c_i, z_i)$ for $i = 1, \ldots, n$
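The final model-selection step of Algorithm 1 can be sketched as follows (illustrative Python with hypothetical container names).

```python
import numpy as np

def select_k(L, z, rule_by_k, ctx_by_k):
    """Pick k* minimizing the average estimated loss
    (1/n) sum_i L(z_i, s_k(c_i, .)). rule_by_k[k] maps a context to the
    index of the chosen single-symbol denoiser; ctx_by_k[k][i] is c_i."""
    n = len(z)
    avg = {k: np.mean([L[z[i], rule[ctx_by_k[k][i]]] for i in range(n)])
           for k, rule in rule_by_k.items()}
    return min(avg, key=avg.get)
```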
Remark: We note that using the cost function in (7) is important. That is, if we use a simpler
objective like (5), $\frac{1}{n}\sum_{i=1}^{n} (L^\top \mathbf{1}_{z_i})^\top \mathbf{p}(\mathbf{w}, c_i)$, it becomes highly non-convex in $\mathbf{w}$, and the solution
$\mathbf{w}^*$ becomes very unstable. Moreover, using $L_{\text{new}}$ instead of $L$ in the cost function is important as
well, since it guarantees that the cost function $\mathcal{C}(\cdot, \cdot)$ is always convex in the second argument.
5 Experimental results
In this section, we show the denoising results of Neural DUDE for the synthetic binary data, real
binary images, and real Oxford Nanopore MinION DNA sequence data. All of our experiments were
done with Python 2.7 and Keras package (http://keras.io) with Theano [17] backend.
5.1 Synthetic binary data
We first experimented with simple synthetic binary data to highlight the core strength of Neural
DUDE. That is, we assume $\mathcal{X} = \mathcal{Z} = \hat{\mathcal{X}} = \{0, 1\}$ and $\Pi$ is a binary symmetric channel (BSC)
with crossover probability $\delta = 0.1$. We set $\Lambda$ as the Hamming loss.
[Figure 1: Denoising results of DUDE and Neural DUDE for the synthetic binary data with n = 10^6.
Panel (a): BER/δ vs. window size k for DUDE, Neural DUDE (1L)–(4L), and the FB recursion
(optimum 0.558δ; best DUDE/Neural DUDE value 0.563δ). Panels (b) and (c): true BER vs. estimated
BER against window size k for DUDE and Neural DUDE (4L), respectively.]
sequence x^n of length n = 10^6 from a binary symmetric Markov chain (BSMC) with transition
probability α = 0.1. The noise-corrupted sequence z^n is generated by passing x^n through Π. Since
we use the Hamming loss, the average loss of a denoiser X̂^n, (1/n) Σ_{i=1}^{n} Λ(x_i, X̂_i(z^n)), is equal to the
bit error rate (BER). Note that in this setting, the noisy sequence z^n is a hidden Markov process.
Therefore, when the stochastic model of the clean sequence is exactly known to the denoiser, the
Viterbi-like Forward-Backward (FB) recursion algorithm can attain the optimum BER.
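For concreteness, a short sketch of this data-generation pipeline, a binary symmetric Markov source corrupted by a BSC; the function names and structure are ours, not the authors' code.

```python
import numpy as np

rng = np.random.RandomState(0)

def sample_bsmc(n, alpha=0.1):
    """Binary symmetric Markov chain: flip the previous symbol w.p. alpha."""
    flips = (rng.rand(n) < alpha).astype(np.int64)
    flips[0] = rng.randint(2)  # random initial symbol
    # x[i] is the parity of the initial symbol plus all flips up to i.
    return (np.cumsum(flips) % 2).astype(np.int8)

def corrupt_bsc(x, delta=0.1):
    """Binary symmetric channel: flip each symbol independently w.p. delta."""
    return (x ^ (rng.rand(len(x)) < delta)).astype(np.int8)

x = sample_bsmc(10**6, alpha=0.1)
z = corrupt_bsc(x, delta=0.1)
ber = np.mean(x != z)  # Hamming loss of the trivial "say what you see" denoiser
```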
Figure 1 shows the denoising results of DUDE and Neural DUDE, which do not know anything
about the characteristics of the clean sequence x^n. For DUDE, the window size k is the single
hyperparameter to choose. For Neural DUDE, we used feed-forward fully connected neural
networks for p(w, ·) and varied the depth of the network between 1 and 4 while also varying k. Neural
DUDE (1L) corresponds to the simple linear softmax regression model. For deeper models, we used
40 hidden nodes in each layer with Rectified Linear Unit (ReLU) activations. We used Adam [16]
with the default setting in Keras as an optimizer to minimize (7). We used a mini-batch size of 100 and
ran 10 epochs for learning. The performance of Neural DUDE was robust to the initialization of the
parameters w. A sketch of this architecture is given below.
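A minimal Keras sketch of the p(w, ·) network, assuming a Keras-2-style API (the paper used a 2016-era Keras with Theano); the output dimension `n_mappings` (the number of single-symbol denoisers) and the compiled loss are our stand-ins, since the paper's objective (7) would require a custom loss built from L_new.

```python
from keras.models import Sequential
from keras.layers import Dense

def build_p_network(k, alphabet_size=2, n_mappings=3, depth=4, width=40):
    """Feed-forward p(w, .): input is the one-hot context of 2k noisy symbols."""
    model = Sequential()
    model.add(Dense(width, activation='relu', input_dim=2 * k * alphabet_size))
    for _ in range(depth - 1):
        model.add(Dense(width, activation='relu'))
    # Output: a probability vector over single-symbol denoisers s: Z -> X_hat.
    model.add(Dense(n_mappings, activation='softmax'))
    # Stand-in loss; the paper instead minimizes the custom objective (7).
    model.compile(optimizer='adam', loss='categorical_crossentropy')
    return model
```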
Figure 1(a) shows the BERs of DUDE and Neural DUDE with respect to varying k. Firstly, we see
that the minimum BERs of both DUDE and Neural DUDE (4L), i.e., 0.563δ with k = 5, get very close
to the optimum BER (0.558δ) obtained by the Forward-Backward (FB) recursion. Secondly, we
observe that Neural DUDE quickly approaches the optimum BER as we increase the depth of the
network. This shows that as the discriminative power of the model increases with the depth of the
network, p(w, ·) can successfully learn the denoising rule for each context c with a shared parameter
w. Thirdly, we clearly see that, in contrast to the performance of DUDE being sensitive to k, that
of Neural DUDE (4L) is robust to k by sharing information across contexts. Such robustness with
respect to k is obviously a very desirable property in practice.
Figure 1(b) and Figure 1(c) plot the average estimated BER, (1/n) Σ_{i=1}^{n} L(Z_i, s_k(c_i, ·)), against the
true BER for DUDE and Neural DUDE (4L), respectively, to show the concentration phenomenon
described in (9). From the figures, we can see that while the estimated BER drastically diverges from
the true BER for DUDE as k increases, it strongly concentrates on the true BER for Neural DUDE (4L) for
all k. This result suggests the concrete rule for selecting the best k described in Algorithm 1. This
rule is used for the experiments on real data in the following subsections.
5.2 Real binary image denoising
In this section, we experiment with real, binary image data. The settings of Π and Λ are identical
to Section 5.1, while the clean sequence was generated by converting each image to a 1-D sequence via
raster scanning. We tested with 5 representative binary images with various textual characteristics:
Einstein, Lena, Barbara, Cameraman, and the scanned Shannon paper. The Einstein and Shannon images had
a resolution of 256 × 256 and the rest 512 × 512. For Neural DUDE, we tested a 4-layer
model with 40 hidden nodes and ReLU activations in each layer.
[Figure 2: Einstein image (256 × 256) denoising results with δ = 0.1. Panel (a): clean image.
Panel (b): BER vs. window size k for DUDE and Neural DUDE (4L), together with the estimated BER
of Neural DUDE (4L); the best values are 0.563δ for DUDE and 0.404δ for Neural DUDE (4L).]
Figure 2(b) shows the result of denoising the Einstein image in Figure 2(a) for δ = 0.1. We see that
the BER of Neural DUDE (4L) continues to drop as we increase k, whereas DUDE quickly fails
to denoise for larger k's. Furthermore, we observe that the estimated BER of Neural DUDE (4L)
again strongly correlates with the true BER. Note that when k = 36, we have 2^72 possible different
contexts, which is much more than the number of pixels, 2^16 (256 × 256). However, we see that
Neural DUDE can still learn a good denoising rule from so many different contexts by aggregating
information from similar contexts.
δ     Scheme        Einstein     Lena         Barbara      Cameraman    Shannon
0.15  DUDE          0.578 (5)    0.494 (6)    0.492 (5)    0.298 (6)    0.498 (5)
0.15  Neural DUDE   0.384 (38)   0.405 (38)   0.448 (33)   0.264 (39)   0.410 (38)
0.15  Improvement   33.6%        18.0%        9.0%         11.5%        17.7%
0.1   DUDE          0.563 (5)    0.495 (6)    0.506 (6)    0.310 (5)    0.475 (5)
0.1   Neural DUDE   0.404 (36)   0.403 (38)   0.457 (27)   0.268 (35)   0.402 (35)
0.1   Improvement   28.2%        18.6%        9.7%         13.6%        15.4%

Table 1: BER results for binary images. Each number represents the relative BER compared to δ and
"Improvement" stands for the relative BER improvement of Neural DUDE (4L) over DUDE. The
numbers inside parentheses are the k values achieving the result.
Table 1 summarizes the denoising results on the five binary images for δ = 0.1, 0.15. We see that Neural
DUDE always significantly outperforms DUDE using a much larger context size k. We believe this is a
significant result, since DUDE has been shown to outperform many state-of-the-art sliding window denoisers
in practice, such as median filters [5, 1]. Furthermore, following DUDE's extension to grayscale
image denoising [2], the result gives strong motivation for extending Neural DUDE to grayscale
image denoising.
5.3 Nanopore DNA sequence denoising
We now go beyond binary data and apply Neural DUDE to DNA sequence denoising. As surveyed
in [9], denoising DNA sequences is becoming increasingly important as sequencing devices are
getting cheaper, but inject more noise than before. For our experiment, we used simulated MinION
Nanopore reads, which were generated as follows: we obtained 16S rDNA reference sequences for
20 species [18] and randomly generated noiseless template reads from them. The number of reads
and the read length for each species were set identical to those of real MinION Nanopore reads [18].
Then, based on the Π of the MinION Nanopore sequencer (Figure 3(a)) obtained in [19] (with 20.375%
average error rate), we induced substitution errors into the reads and obtained the corresponding noisy
reads. Note that we are only considering substitution errors, while there also exist insertion/deletion
errors in real Nanopore sequenced data. The reason is that substitution errors can be directly handled
by DUDE and Neural DUDE, so we focus on quantitatively evaluating the performance on those
errors. We sequentially merged 2,372 reads from 20 species and formed a 1-D sequence 2,469,111
base pairs long. We used two Neural DUDE (4L) models with 40 and 80 hidden nodes in each layer,
denoted as (40-40-40) and (80-80-80), respectively.
[Figure 3: Nanopore DNA sequence denoising results. Panel (a): Π of the nanopore sequencer.
Panel (b): (error rate)/δ vs. window size k, with best values 0.909δ for DUDE, 0.544δ for
Neural DUDE (40-40-40), and 0.427δ for Neural DUDE (80-80-80).]
Figure 3(b) shows the denoising results. We observe that Neural DUDE with large k's (around
k = 100) can achieve less than half the error rate of DUDE. Furthermore, as the complexity
of the model increases, the performance of Neural DUDE gets significantly better. We could not find
a comparable baseline scheme, since most nanopore error correction tools, e.g., Nanocorr [20],
do not produce read-by-read corrected sequences, but return downstream analysis results after
denoising. Coral [21], which gives read-by-read denoising results for Illumina data, completely failed
on the nanopore data. Given that DUDE outperforms state-of-the-art schemes, including Coral, on
Illumina sequenced data as shown in [3], we expect the improvement of Neural DUDE over DUDE
to translate into fruitful downstream analysis gains for nanopore data.
6 Concluding remark and future work
We showed that Neural DUDE significantly improves upon DUDE and has a systematic mechanism for
choosing the best k. There are several future research directions. First, we plan to do thorough
experiments on DNA sequence denoising and quantify the impact of Neural DUDE on the downstream
analyses. Second, we plan to give theoretical analyses of the concentration (9) and justify the derived
k selection rule. Third, extending the framework to deal with continuous-valued signals and finding
connections with the SURE principle [22] would be fruitful. Finally, applying recurrent neural networks
(RNNs) in place of DNNs could be another promising direction.
Acknowledgments
T. Moon was supported by DGIST Faculty Start-up Fund (2016010060) and Basic Science Research
Program through the National Research Foundation of Korea (2016R1C1B2012170), both funded by
Ministry of Science, ICT and Future Planning. S. Min, B. Lee, and S. Yoon were supported in part by
Brain Korea 21 Plus Project (SNU ECE) in 2016.
References
[1] E. Ordentlich, G. Seroussi, S. Verdú, M. J. Weinberger, and T. Weissman. A universal discrete
image denoiser and its application to binary images. In IEEE ICIP, 2003.
[2] Giovanni Motta, Erik Ordentlich, Ignacio Ramirez, Gadiel Seroussi, and Marcelo J. Weinberger.
The iDUDE framework for grayscale image denoising. IEEE Trans. Image Processing, 20:1–21,
2011.
[3] B. Lee, T. Moon, S. Yoon, and T. Weissman. DUDE-Seq: Fast, flexible, and robust denoising
of nucleotide sequences. arXiv:1511.04836, 2016.
[4] E. Ordentlich, G. Seroussi, S. Verdú, and K. Viswanathan. Universal algorithms for channel
decoding of uncompressed sources. IEEE Trans. Inform. Theory, 54(5):2243–2262, 2008.
[5] T. Weissman, E. Ordentlich, G. Seroussi, S. Verdú, and M. J. Weinberger. Universal discrete
denoising: Known channel. IEEE Trans. Inform. Theory, 51(1):5–28, 2005.
[6] G. Hinton, Y. LeCun, and Y. Bengio. Deep learning. Nature, 521:436–444, 2015.
[7] H. Burger, C. Schuler, and S. Harmeling. Image denoising: Can plain neural networks compete
with BM3D? In CVPR, 2012.
[8] J. Xie, L. Xu, and E. Chen. Image denoising and inpainting with deep neural networks. In NIPS,
2012.
[9] D. Laehnemann, A. Borkhardt, and A. C. McHardy. Denoising DNA deep sequencing data –
high-throughput sequencing errors and their corrections. Brief Bioinform, 17(1):154–179, 2016.
[10] V. Jain and H. S. Seung. Natural image denoising with convolutional networks. In NIPS, 2008.
[11] T. Moon and T. Weissman. Discrete denoising with shifts. IEEE Trans. Inform. Theory,
2009.
[12] T. Weissman, E. Ordentlich, M. Weinberger, A. Somekh-Baruch, and N. Merhav. Universal
filtering via prediction. IEEE Trans. Inform. Theory, 53(4):1253–1264, 2007.
[13] Y. Bengio, R. Ducharme, P. Vincent, and C. Jauvin. A neural probabilistic language model.
JMLR, 3:1137–1155, 2003.
[14] Y. Nesterov. A method of solving a convex programming problem with convergence rate
O(1/k^2). Soviet Mathematics Doklady, 27:372–376, 1983.
[15] T. Tieleman and G. Hinton. RMSProp: Divide the gradient by a running average of its recent
magnitude. Lecture Note 6-5, University of Toronto, 2012.
[16] D. Kingma and J. Ba. Adam: A method for stochastic optimization. In ICLR, 2015.
[17] Frédéric Bastien, Pascal Lamblin, Razvan Pascanu, James Bergstra, Ian Goodfellow, Arnaud
Bergeron, Nicolas Bouchard, David Warde-Farley, and Yoshua Bengio. Theano: new features
and speed improvements. In NIPS Workshop on Deep Learning and Unsupervised Feature
Learning, 2012.
[18] A. Benitez-Paez, K. Portune, and Y. Sanz. Species level resolution of 16S rRNA gene amplicons
sequenced through the MinION portable nanopore sequencer. bioRxiv:021758, 2015.
[19] M. Jain, I. Fiddes, K. Miga, H. Olsen, B. Paten, and M. Akeson. Improved data analysis for the
MinION nanopore sequencer. Nature Methods, 12:351–356, 2015.
[20] S. Goodwin, J. Gurtowski, S. Ethe-Sayers, P. Deshpande, M. Schatz, and W. R. McCombie.
Oxford Nanopore sequencing, hybrid error correction, and de novo assembly of a eukaryotic
genome. Genome Res., 2015.
[21] L. Salmela and J. Schroder. Correcting errors in short reads by multiple alignments. Bioinformatics, 27(11):1455–1461, 2011.
[22] C. Stein. Estimation of the mean of a multivariate normal distribution. The Annals of Statistics,
9(6):1135–1151, 1981.
Online and Differentially-Private Tensor Decomposition
Animashree Anandkumar
Department of EECS
University of California, Irvine
a.anandkumar@uci.edu
Yining Wang
Machine Learning Department
Carnegie Mellon University
yiningwa@cs.cmu.edu
Abstract
Tensor decomposition is an important tool for big data analysis. In this paper,
we resolve many of the key algorithmic questions regarding robustness, memory
efficiency, and differential privacy of tensor decomposition. We propose simple
variants of the tensor power method which enjoy these strong properties. We present
the first guarantees for online tensor power method which has a linear memory
requirement. Moreover, we present a noise calibrated tensor power method with
efficient privacy guarantees. At the heart of all these guarantees lies a careful
perturbation analysis derived in this paper, which improves upon the existing
results significantly.
Keywords: Tensor decomposition, tensor power method, online methods, streaming, differential privacy, perturbation analysis.
1 Introduction
In recent years, tensor decomposition has emerged as a powerful tool to solve many challenging
problems in unsupervised [1], supervised [18] and reinforcement learning [4]. Tensors are higher
order extensions of matrices which can reveal far greater information compared to matrices, while
retaining most of the efficiencies of matrix operations [1].
A central task in tensor analysis is the process of decomposing the tensor into its rank-1 components,
which is usually referred to as CP (Candecomp/Parafac) decomposition in the literature. While
decomposition of arbitrary tensors is NP-hard [13], it becomes tractable for the class of tensors
with linearly independent components. Through a simple whitening procedure, such tensors can
be converted to orthogonally decomposable tensors. Tensor power method is a popular method for
computing the decomposition of an orthogonal tensor. It is simple and efficient to implement, and a
natural extension of the matrix power method.
In the absence of noise, the tensor power method correctly recovers the components under a random
initialization followed by deflation. On the other hand, perturbation analysis of tensor power method is
much more delicate compared to the matrix case. This is because the problem of tensor decomposition
is NP-hard, and if a large amount of arbitrary noise is added to an orthogonal tensor, the decomposition
can again become intractable. In [1], guaranteed recovery of components was proven under bounded
noise, and the bound was improved in [2]. In this paper, we significantly improve upon the noise
requirements, i.e. the extent of noise that can be withstood by the tensor power method.
In order for tensor methods to be deployed in large-scale systems, we require fast, parallelizable
and scalable algorithms. To achieve this, we need to avoid the exponential increase in computation
and memory requirements with the order of the tensor; i.e., a naive implementation on a 3rd-order
d-dimensional tensor would require O(d^3) computation and memory. Instead, we analyze the online
tensor power method that requires only linear (in d) memory and does not form the entire tensor. This
is achieved in settings, where the tensor is an empirical higher order moment, computed from the
stream of data samples. We can avoid explicit construction of the tensor by running online tensor
30th Conference on Neural Information Processing Systems (NIPS 2016), Barcelona, Spain.
power method directly on i.i.d. data samples. We show that this algorithm correctly recovers tensor
components in time¹ Õ(nk^2 d) and Õ(dk) memory for a rank-k tensor and n data samples.
Additionally, we provide an efficient sample complexity analysis.

¹ Õ(·) hides poly-logarithmic factors.
As spectral methods become increasingly popular in recommendation system and health analytics
applications [29, 17], data privacy is particularly relevant in the context of preserving sensitive private
information. Differential privacy could still be useful even if data privacy is not the prime concern
[30]. We propose the first differentially private tensor decomposition algorithm with both privacy and
utility guarantees via noise calibrated power iterations. We show that under the natural assumption of
tensor incoherence, privacy parameters have no (polynomial) dependence on tensor dimension d. On
the other hand, straightforward input perturbation type methods lead to far worse bounds and do not
yield guaranteed recovery for all values of privacy parameters.
1.1 Related work
Online tensor SGD Stochastic gradient descent (SGD) is an intuitive approach for online tensor
decomposition and has been successful in practical large-scale tensor decomposition problems [16].
Despite its simplicity, theoretical properties are particularly hard to establish. [11] considered a
variant of the SGD objective and proved its correctness. However, the approach in [11] only works
for even-order tensors and its sample complexity dependency upon tensor dimension d is poor.
Tensor PCA In the statistical tensor PCA [24] model, a d × d × d tensor T = v^{⊗3} + E is observed and
one wishes to recover the component v in the presence of Gaussian random noise E. [24] shows that
‖E‖_op = O(d^{-1/2}) is sufficient to guarantee approximate recovery of v, and [14] further improves
the noise condition to ‖E‖_op = O(d^{-1/4}) via a 4th-order sum-of-squares relaxation. Techniques in
both [24, 14] are rather complicated and could be difficult to adapt to memory or privacy constraints.
Furthermore, in [24, 14] only one component is considered. On the other hand, [25] shows that
‖E‖_op = O(d^{-1/2}) is sufficient for recovering multiple components from noisy tensors. However,
[25] assumes exact computation of the rank-1 tensor approximation, which is NP-hard in general.
Noisy matrix power methods Our relaxed noise condition analysis for tensor power method is
inspired by recent analysis of noisy matrix power methods [12, 6]. Unlike the matrix case, tensor
decomposition no longer requires spectral gap among eigenvalues and eigenvectors are usually
recovered one at a time [1, 2]. This poses new challenges and requires non-trivial extensions of
matrix power method analysis to the tensor case.
1.2 Notation and Preliminaries
We use [n] to denote the set {1, 2, …, n}. We use bold characters A, T, v for matrices, tensors,
vectors, and normal characters (e.g., λ, ε) for scalars. A pth-order tensor T of dimensions d_1, …, d_p has
d_1 ⋯ d_p elements, each indexed by a p-tuple (i_1, …, i_p) ∈ [d_1] × ⋯ × [d_p]. A tensor T of
dimensions d × ⋯ × d is super-symmetric, or simply symmetric, if T_{i_1,…,i_p} = T_{i_{π(1)},…,i_{π(p)}} for all
permutations π : [p] → [p]. For a tensor T ∈ R^{d_1×⋯×d_p} and matrices A_1 ∈ R^{d_1×m_1}, …, A_p ∈
R^{d_p×m_p}, the multi-linear form T(A_1, …, A_p) is an m_1 × ⋯ × m_p tensor defined as

    [T(A_1, …, A_p)]_{i_1,…,i_p} = Σ_{j_1∈[d_1]} ⋯ Σ_{j_p∈[d_p]} T_{j_1,…,j_p} [A_1]_{j_1,i_1} ⋯ [A_p]_{j_p,i_p}.

We use ‖v‖_2 = (Σ_i v_i^2)^{1/2} for the vector 2-norm and ‖v‖_∞ = max_i |v_i| for the vector infinity norm.
We use ‖T‖_op to denote the operator norm or spectral norm of a tensor T, which is defined as
‖T‖_op = sup_{‖u_1‖_2=⋯=‖u_p‖_2=1} T(u_1, …, u_p). An event A is said to occur with overwhelming
probability if Pr[A] ≥ 1 − d^{-10}.

We limit ourselves to symmetric 3rd-order tensors (p = 3) in this paper. The results can be directly
extended to asymmetric tensors since they can first be symmetrized using simple matrix operations
(see [1]). Extension to higher-order tensors is also straightforward. A symmetric 3rd-order tensor T
is rank-1 if it can be written in the form

    T = λ · v ⊗ v ⊗ v = λ v^{⊗3},   i.e.,   T_{i,j,ℓ} = λ · v(i) · v(j) · v(ℓ),    (1)
Algorithm 1 Robust tensor power method [1]
1: Input: symmetric d × d × d tensor T̃, number of components k ≤ d, numbers of iterations L, R.
2: for i = 1 to k do
3:    Initialization: Draw u_0 uniformly at random from the unit sphere in R^d.
4:    Power iteration: Compute u_t = T̃(I, u_{t-1}, u_{t-1}) / ‖T̃(I, u_{t-1}, u_{t-1})‖_2 for t = 1, …, R.
5:    Boosting: Repeat Steps 3 and 4 for L times and obtain u_R^{(1)}, …, u_R^{(L)}. Let τ* =
      argmax_{τ∈[L]} T̃(u_R^{(τ)}, u_R^{(τ)}, u_R^{(τ)}). Set v̂_i = u_R^{(τ*)} and λ̂_i = T̃(u_R^{(τ*)}, u_R^{(τ*)}, u_R^{(τ*)}).
6:    Deflation: T̃ ← T̃ − λ̂_i v̂_i^{⊗3}.
7: end for
8: Output: Estimated eigenvalue/eigenvector pairs {λ̂_i, v̂_i}_{i=1}^{k}.
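For reference, a compact NumPy sketch of Algorithm 1; this is our own rendering under the stated parameters, not the authors' implementation.

```python
import numpy as np

def robust_tpm(T, k, L=20, R=30, rng=np.random):
    """Robust tensor power method on a symmetric d x d x d tensor T."""
    T = T.copy()
    d = T.shape[0]
    lams, vecs = [], []
    for _ in range(k):
        best_val, best_u = -np.inf, None
        for _ in range(L):                       # boosting over L random restarts
            u = rng.randn(d)
            u /= np.linalg.norm(u)
            for _ in range(R):                   # power iterations u <- T(I, u, u)
                u = np.einsum('ijk,j,k->i', T, u, u)
                u /= np.linalg.norm(u)
            val = np.einsum('ijk,i,j,k->', T, u, u, u)
            if val > best_val:
                best_val, best_u = val, u
        lams.append(best_val)
        vecs.append(best_u)
        # Deflation: subtract the recovered rank-1 component.
        T -= best_val * np.einsum('i,j,k->ijk', best_u, best_u, best_u)
    return np.array(lams), np.array(vecs)
```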
where ⊗ represents the outer product, v ∈ R^d is a unit vector (i.e., ‖v‖_2 = 1), and λ ∈ R_+.² A
tensor T ∈ R^{d×d×d} is said to have a CP (Candecomp/Parafac) rank k if it can be (minimally) written
as the sum of k rank-1 tensors:

    T = Σ_{i∈[k]} λ_i v_i ⊗ v_i ⊗ v_i,   λ_i ∈ R_+, v_i ∈ R^d.    (2)

A tensor is said to be orthogonally decomposable if in the above decomposition ⟨v_i, v_j⟩ = 0 for
i ≠ j. Any tensor can be converted to an orthogonal tensor through an invertible whitening transform,
provided that v_1, v_2, …, v_k are linearly independent [1]. We thus limit our analysis to orthogonal
tensors in this paper, since it can be extended to this more general class in a straightforward manner.

² One can always assume without loss of generality that λ ≥ 0 by replacing v with −v instead.
Tensor Power Method: A popular algorithm for finding the tensor decomposition in (2) is through
the tensor power method. The full algorithm is given in Algorithm 1. We first provide an improved
noise analysis for the robust power method, improving error tolerance bounds previously established
in [1]. We next propose memory-efficient and/or differentially private variants of the robust power
method and give performance guarantees based on our improved noise analysis.
2 Improved Noise Analysis for Tensor Power Method
When the tensor T has an exact orthogonal decomposition, the power method provably recovers all
the components with random initialization and deflation. However, the analysis is more subtle under
noise. While matrix perturbation bounds are well understood, it is an open problem in the case of
tensors. This is because the problem of tensor decomposition is NP-hard, and becomes tractable
only under special conditions such as orthogonality (and more generally linear independence). If
a large amount of arbitrary noise is added, the decomposition can again become intractable. In [1],
guaranteed recovery of components was proven under bounded noise and we recap the result below.
Theorem 2.1 ([1] Theorem 5.1, simplified version). Suppose T̃ = T + ΔT, where T = Σ_{i=1}^{k} λ_i v_i^{⊗3}
with λ_i > 0 and orthonormal basis vectors {v_1, …, v_k} ⊂ R^d, d ≥ k, and the noise ΔT satisfies
‖ΔT‖_op ≤ ε. Let λ_max, λ_min be the largest and smallest values in {λ_i}_{i=1}^{k} and let {λ̂_i, v̂_i}_{i=1}^{k} be the
outputs of Algorithm 1. There exist absolute constants K_0, C_1, C_2, C_3 > 0 such that if

    ε ≤ C_1 · λ_min/d,  R = Ω(log d + log log(λ_max/ε)),  L = Ω(max{K_0, k} log(max{K_0, k})),    (3)

then with probability at least 0.9, there exists a permutation π : [k] → [k] such that

    |λ_i − λ̂_{π(i)}| ≤ C_2 ε,   ‖v_i − v̂_{π(i)}‖_2 ≤ C_3 ε/λ_i,   ∀i = 1, …, k.
Theorem 2.1 is the first provably correct result on robust tensor decomposition under general noise
conditions. In particular, the noise term ΔT can be deterministic or even adversarial. However, one
important drawback of Theorem 2.1 is that ‖ΔT‖_op must be upper bounded by O(λ_min/d), which
is a strong assumption for many practical applications [28]. On the other hand, [2, 24] show that
by using smart initializations the robust tensor power method is capable of tolerating O(λ_min/√d)
magnitude of noise, and [25] suggests that such noise magnitude cannot be improved if deflation (i.e.,
successive rank-one approximation) is to be performed.

In this paper, we show that the relaxed noise bound O(λ_min/√d) holds even if the initialization of
robust TPM is as simple as a vector uniformly sampled from the d-dimensional sphere (Algorithm 1).
Our claim is formalized below:
Theorem 2.2 (Improved noise tolerance analysis for robust TPM). Assume the same notation as
in Theorem 2.1. Let ε ∈ (0, 1/2) be an error tolerance parameter. There exist absolute constants
K_0, C_0, C_1, C_2, C_3 > 0 such that if ΔT satisfies

    ‖ΔT(I, u_t^{(τ)}, u_t^{(τ)})‖_2 ≤ ε,   |ΔT(v_i, u_t^{(τ)}, u_t^{(τ)})| ≤ min{ε/√k, C_0 λ_min/d}    (4)

for all i ∈ [k], t ∈ [R], τ ∈ [L], and furthermore

    ε ≤ C_1 · λ_min/√k,  R = Ω(log(λ_max d/ε)),  L = Ω(max{K_0, k} log(max{K_0, k})),    (5)

then with probability at least 0.9, there exists a permutation π : [k] → [k] such that

    |λ_i − λ̂_{π(i)}| ≤ C_2 ε,   ‖v_i − v̂_{π(i)}‖_2 ≤ C_3 ε/λ_i,   ∀i = 1, …, k.
Due to space constraints, the proof of Theorem 2.2 is placed in Appendix C. We next make several
remarks on our results. In particular, we consider three scenarios with increasing assumptions
imposed on the noise tensor ΔT and compare the noise conditions in Theorem 2.2 with existing
results on orthogonal tensor decomposition:
1. ΔT does not have any special structure: in this case, we only have |ΔT(v_i, u_t, u_t)| ≤
   ‖ΔT‖_op and our noise condition reduces to the classical one: ‖ΔT‖_op = O(λ_min/d).
2. ΔT is "round" in the sense that |ΔT(v_i, u_t, u_t)| ≤ O(1/√d) · ‖ΔT(I, u_t, u_t)‖_2: this is
   the typical setting when the noise ΔT follows Gaussian or sub-Gaussian distributions, as
   we explain in Sec. 3 and 4. Our noise condition in this case is ‖ΔT‖_op = O(λ_min/√d),
   strictly improving Theorem 2.1 on the robust tensor power method with random initializations
   and matching the bound for more advanced SVD initialization techniques in [2].
3. ΔT is weakly correlated with the signal in the sense that ‖ΔT(v_i, I, I)‖_2 = O(λ_min/d) for
   all i ≤ k: in this case our noise condition reduces to ‖ΔT‖_op = O(λ_min/√k), strictly
   improving over SVD initialization [2] in the "undercomplete" regime k = o(d). Note that
   the whitening trick [3, 1] does not attain our bound, as we explain in Appendix B.
Finally, we remark that the log log(1/ε) quadratic convergence rate in Eq. (3) is worsened to a log(1/ε)
linear rate in Eq. (5). We are not sure whether this is an artifact of our analysis, because similar
analysis for the matrix noisy power method [12] also reveals a linear convergence rate.
Implications Our bounds in Theorem 2.2 result in sharper analysis of both the memory-efficient and
differentially private power methods which we propose in Sec. 3 and 4. Using the original analysis
(Theorem 2.1) for the two applications, the memory-efficient tensor power method would have sample
complexity cubic in the dimension d, and for differentially private tensor decomposition the privacy
level ε would need to scale as Ω̃(√d) as d increases, which is particularly bad as the quality of privacy
protection e^ε degrades exponentially with tensor dimension d. On the other hand, our improved noise
condition in Theorem 2.2 greatly sharpens the bounds in both applications: for memory-efficient
decomposition, we now require only quadratic sample complexity, and for differentially private
decomposition, the privacy level ε has no polynomial dependence on d. This makes our results far
more practical for high-dimensional tensor decomposition applications.
Numerical verification of noise conditions and comparison with whitening techniques We verify our improved noise conditions for robust tensor power method on simulation tensor data. In
particular, we consider three noise models and demonstrate varied asymptotic noise magnitudes at
which tensor power method succeeds. The simulation results nicely match our theoretical findings
and also suggest, in an empirical way, tightness of noise bounds in Theorem 2.2. Due to space
constraints, simulation results are placed in Appendix A.
We also compare our improved noise bound with those obtained by whitening, a popular technique that
reduces tensor decomposition to matrix decomposition problems [1, 21, 28]. We show in Appendix
B that, without side information the standard analysis of whitening based tensor decomposition leads
to worse noise tolerance bounds than what we obtained in Theorem 2.2.
3 Memory-Efficient Streaming Tensor Decomposition
Tensor power method in Algorithm 1 requires significant storage to be deployed: Θ(d^3) memory
is required to store a dense d × d × d tensor, which is prohibitively large in many real-world
applications since the tensor dimension d can be very high. We show in this section how to compute
tensor decomposition in a memory-efficient manner, with storage scaling linearly in d. In particular,
we consider the case where the tensor T to be decomposed is a population moment E_{x∼D}[x^{⊗3}] with
respect to some unknown underlying data distribution D, and data points x_1, x_2, … i.i.d. sampled
from D are fed into the tensor decomposition algorithm in a streaming fashion. One classical example is
topic modeling, where the x_i represent documents that arrive in streams and consistent estimation
of topics can be achieved by decomposing variants of the population moment [1, 3].
Algorithm 2 displays the memory-efficient tensor decomposition procedure on streaming data points. The
main idea is to replace the power iteration step T(I, u, u) in Algorithm 1 with a "data association"
step that exploits the empirical-moment structure of the tensor T to be decomposed and evaluates
approximate power iterations from stochastic data samples. This procedure is highly efficient, in that
both time and space complexity scale linearly with tensor dimension d:
Proposition 3.1. Algorithm 2 runs in O(nkdLR) time and O(d(k + L)) memory, with O(nkR)
sample complexity (number of data points processed).
In the remainder of this section we show that Algorithm 2 recovers eigenvectors of the population moment
E_{x∼D}[x^{⊗3}] with high probability, and we derive corresponding sample complexity bounds. To
facilitate our theoretical analysis we need several assumptions on the data distribution D. The first
natural assumption is the low-rankness of the population moment E_{x∼D}[x^{⊗3}] to be decomposed:
Assumption 3.1 (Low-rank moment). The mean tensor T = E_{x∼D}[x^{⊗3}] admits a low-rank representation T = Σ_{i=1}^{k} λ_i v_i^{⊗3} for λ_1, …, λ_k > 0 and orthonormal {v_1, …, v_k} ⊂ R^d.
We also place restrictions on the "noise model", which imply that the population moment E_{x∼D}[x^{⊗3}]
can be well approximated by a reasonable number of samples with high probability. In particular, we
consider sub-Gaussian noise as formulated in Definition 3.1 and Assumption 3.2:
Definition 3.1 (Multivariate sub-Gaussian distribution, [15]). A D-dimensional random variable x
belongs to the sub-Gaussian distribution family SG_D(σ) with parameter σ > 0 if it has zero mean
and E[exp(a^⊤x)] ≤ exp(‖a‖_2^2 σ^2/2) for all a ∈ R^D.
Assumption 3.2 (Sub-Gaussian noise). There exists σ > 0 such that the mean-centered vectorized
random variable vec(x^{⊗3} − E[x^{⊗3}]) belongs to SG_{d^3}(σ) as defined in Definition 3.1.
We remark that Assumption 3.2 includes a wide family of distributions of practical importance,
for example noise with compact support. Assumption 3.2 also resembles the (B, p)-round noise
considered in [12], which imposes spherical symmetry constraints on the noise distribution.
We are now ready to present the main theorem, which bounds the recovery (approximation) error of
eigenvalues and eigenvectors of the streaming robust tensor power method in Algorithm 2:
Theorem 3.1 (Analysis of streaming robust tensor power method). Let Assumptions 3.1 and 3.2 hold
and suppose ε < C_1 λ_min/√k for some sufficiently small absolute constant C_1 > 0. If

    n = Ω̃( min{ σ^2 d/ε^2, σ^2 d^2/λ_min^2 } ),  R = Ω(log(λ_max d/ε)),  L = Ω(k log k),

then with probability at least 0.9 there exists a permutation π : [k] → [k] such that

    |λ_i − λ̂_{π(i)}| ≤ C_2 ε,  ‖v_i − v̂_{π(i)}‖_2 ≤ C_3 ε/λ_i,  ∀i = 1, …, k

for some universal constants C_2, C_3 > 0.
Corollary 3.1 is then an immediate consequence of Theorem 3.1; it simplifies the bounds and
highlights asymptotic dependencies on the important model parameters d, k and σ:
Algorithm 2 Online robust tensor power method
1: Input: data stream x_1, x_2, … ∈ R^d, no. of components k, parameters L, R, n.
2: for i = 1 to k do
3:    Draw u_0^{(1)}, …, u_0^{(L)} i.i.d. uniformly at random from the unit sphere S^{d-1}.
4:    for t = 0 to R − 1 do
5:       Initialization: Set accumulators ū_{t+1}^{(1)}, …, ū_{t+1}^{(L)} and λ̄^{(1)}, …, λ̄^{(L)} to 0.
6:       Data association: Read the next n data points; update ū_{t+1}^{(τ)} ← ū_{t+1}^{(τ)} + (1/n)(x_ℓ^⊤ u_t^{(τ)})^2 x_ℓ
         and λ̄^{(τ)} ← λ̄^{(τ)} + (1/n)(x_ℓ^⊤ u_t^{(τ)})^3 for each ℓ ∈ [n] and τ ∈ [L].
7:       Deflation: For each τ ∈ [L], update ū_{t+1}^{(τ)} ← ū_{t+1}^{(τ)} − Σ_{j=1}^{i-1} λ̂_j θ_{j,τ}^2 v̂_j
         and λ̄^{(τ)} ← λ̄^{(τ)} − Σ_{j=1}^{i-1} λ̂_j θ_{j,τ}^3, where θ_{j,τ} = v̂_j^⊤ u_t^{(τ)}.
8:       Normalization: u_{t+1}^{(τ)} = ū_{t+1}^{(τ)} / ‖ū_{t+1}^{(τ)}‖_2, for each τ ∈ [L].
9:    end for
10:   Find τ* = argmax_{τ∈[L]} λ̄^{(τ)} and store λ̂_i = λ̄^{(τ*)}, v̂_i = u_R^{(τ*)}.
11: end for
12: Output: approximate eigenvalue and eigenvector pairs {λ̂_i, v̂_i}_{i=1}^{k} of Ê_{x∼D}[x^{⊗3}].
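A sketch of the per-batch data-association and deflation updates (steps 6–8) in NumPy; the variable names are ours, not the authors' code.

```python
import numpy as np

def streaming_power_update(X, u, deflate_lams, deflate_vecs):
    """One power-iteration update from a batch X (n x d) of samples.

    Computes u_bar = (1/n) sum_l (x_l^T u)^2 x_l, the empirical T(I, u, u),
    then subtracts the contribution of previously recovered components
    before normalizing.
    """
    n = X.shape[0]
    proj = X @ u                              # (x_l^T u) for each sample
    u_bar = (proj ** 2) @ X / n               # empirical T(I, u, u)
    lam_bar = np.mean(proj ** 3)              # empirical T(u, u, u)
    for lam_j, v_j in zip(deflate_lams, deflate_vecs):
        theta = v_j @ u
        u_bar -= lam_j * theta ** 2 * v_j     # deflate u_bar
        lam_bar -= lam_j * theta ** 3         # deflate lam_bar
    return u_bar / np.linalg.norm(u_bar), lam_bar
```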
Corollary 3.1. Under Assumptions 3.1 and 3.2, Algorithm 2 correctly learns {λ_i, v_i}_{i=1}^{k} up to O(1/√d)
additive error with Õ(σ^2 k d^2) samples and Õ(dk) memory.
Proofs of Theorem 3.1 and Corollary 3.1 are both deferred to Appendix D. Compared to the streaming
noisy matrix PCA considered in [12], the bound is weaker, with an additional 1/k factor in the term
involving ε and a 1/d factor in the term that does not involve ε. We conjecture this to be a fundamental
difficulty of the tensor decomposition problem. On the other hand, our bounds resulting from the
analysis in Sec. 2 have an O(1/d) improvement compared to applying the existing analysis in [1] directly.
Remark on comparison with SGD: Our proposed streaming tensor power method is nothing but
the projected stochastic gradient descent (SGD) procedure on the objective of maximizing the tensor
norm on the sphere. The optimal solution of this coincides with the objective of finding the best
rank-1 approximation of the tensor. Here, we can estimate all the components of the tensor through
deflation. An alternative method is to run SGD on a combined objective function to obtain all the
components of the tensor simultaneously, as considered in [16, 11]. However, the analysis in [11]
only works for even-order tensors and has worse dependency (at least d^9) on tensor dimension d.
4 Differentially private tensor decomposition
The objective of private data processing is to release data summaries such that any particular entry of
the original data cannot be reliably inferred from the released results. Formally speaking, we adopt
the popular (ε, δ)-differential privacy criterion proposed in [9]:
Definition 4.1 ((ε, δ)-differential privacy [9]). Let M denote all symmetric d-dimensional real
third-order tensors and O be an arbitrary output set. A randomized algorithm A : M → O is
(ε, δ)-differentially private if for all neighboring tensors T, T′ and measurable sets O ⊆ O we have

    Pr[A(T) ∈ O] ≤ e^ε Pr[A(T′) ∈ O] + δ,

where ε > 0, δ ∈ [0, 1) are privacy parameters and probabilities are taken over the randomness in A.
Since our tensor decomposition analysis concerns symmetric tensors primarily, we adopt a ?symmetric? definition of neighboring tensors in Definition 4.1, as shown below:
Definition 4.2 (Neighboring tensors). Two d?d?d symmetric tensors T, T0 are neighboring tensors
if there exists i, j, k ? [d] such that
T0 ?T = ?symmetrize(ei ?ej ?ek ) = ? (ei ? ej ? ek + ei ? ek ? ej + ? ? ? + ek ? ej ? ei ) .
As noted earlier, the above notions can be similarly extended to asymmetric tensors, as can the
guarantees for the tensor power method on asymmetric tensors. We also remark that the difference of
"neighboring tensors" as defined above has Frobenius norm bounded by O(1). This is necessary
because an arbitrary perturbation of a tensor, even if restricted to only one entry, is capable of
destroying any utility guarantee possible.
Algorithm 3 Differentially private robust tensor power method
1: Input: tensor T, no. of components k, numbers of iterations L, R, privacy parameters ε, δ.
2: Initialization: D = 0, σ = 6√(2 ln(1.25/δ_0)) / ε_0, δ_0 = δ/(2K), ε_0 = ε/√(K(4 + ln(2/δ))),
   K = kL(R + 1).
3: for i = 1 to k do
4:    Initialization: Draw u_0^{(1)}, …, u_0^{(L)} uniformly at random from the unit sphere in R^d.
5:    for t = 0 to R − 1 do
6:       Power iteration: compute ū_{t+1}^{(τ)} = (T − D)(I, u_t^{(τ)}, u_t^{(τ)}).
7:       Noise calibration: release ū_{t+1}^{(τ)} = ū_{t+1}^{(τ)} + σ‖u_t^{(τ)}‖_∞^2 · z_t^{(τ)}, where z_t^{(τ)} ~ i.i.d. N(0, I_d).
8:       Normalization: u_{t+1}^{(τ)} = ū_{t+1}^{(τ)} / ‖ū_{t+1}^{(τ)}‖_2.
9:    end for
10:   Compute λ̄^{(τ)} = (T − D)(u_R^{(τ)}, u_R^{(τ)}, u_R^{(τ)}) + σ‖u_R^{(τ)}‖_∞^3 · z̄_τ and let τ* = argmax_τ λ̄^{(τ)}.
11:   Deflation: λ̂_i = λ̄^{(τ*)}, v̂_i = u_R^{(τ*)}, D ← D + λ̂_i v̂_i^{⊗3}.
12: end for
13: Output: eigenvalue/eigenvector pairs {λ̂_i, v̂_i}_{i=1}^{k}.
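A sketch of the calibrated power-iteration step (steps 6–8 of Algorithm 3) in NumPy; σ would be set from (ε_0, δ_0) as in line 2 of the algorithm, and the names are ours.

```python
import numpy as np

def private_power_step(T, D, u, sigma, rng=np.random):
    """One noise-calibrated power iteration: u <- (T - D)(I, u, u) + noise."""
    u_bar = np.einsum('ijk,j,k->i', T - D, u, u)
    scale = sigma * np.linalg.norm(u, ord=np.inf) ** 2   # sigma * ||u||_inf^2
    u_bar = u_bar + scale * rng.randn(len(u))            # Gaussian mechanism noise
    return u_bar / np.linalg.norm(u_bar)
```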
In a nutshell, Definitions 4.1 and 4.2 state that an algorithm A is differentially private if, conditioned
on any set of possible outputs of A, one cannot distinguish with high probability between two
"neighboring" tensors T, T′ that differ only in a single entry (up to symmetrization), thus protecting
the privacy of any particular element in the original tensor T. Here ε, δ are parameters controlling
the level of privacy, with smaller ε, δ values implying a stronger privacy guarantee as Pr[A(T) ∈ O]
and Pr[A(T′) ∈ O] are closer to each other.
Algorithm 3 describes the procedure for privately releasing eigenvectors of a low-rank input tensor T.
The main idea for privacy preservation is the following noise calibration step

    ū_{t+1} = ū_{t+1} + σ‖u_t‖_∞^2 · z_t,

where z_t is a d-dimensional standard Normal random variable and σ‖u_t‖_∞^2 is a carefully designed
noise magnitude chosen to achieve the desired privacy level (ε, δ). One key aspect is that the noise
calibration step occurs at every power iteration, which adds to the robustness of the algorithm and
achieves sharper bounds. We discuss this at the end of this section.
Theorem 4.1 (Privacy guarantee). Algorithm 3 satisfies (?, ?)-differential privacy.
Proof. Each power iteration step of Algorithm 3 can be thought of as one of K = kL(R + 1) queries
directed to a private data sanitizer which produces f_1(T; u) = T(I, u, u) or f_2(T; u) = T(u, u, u)
each time. The ℓ_2-sensitivity of both queries can be separately bounded as

    Δ_2 f_1 = sup_{T′} ‖T(I, u, u) − T′(I, u, u)‖_2 ≤ sup_{i,j,k} 2(|u_i u_j| + |u_i u_k| + |u_j u_k|) ≤ 6‖u‖_∞^2;
    Δ_2 f_2 = sup_{T′} |T(u, u, u) − T′(u, u, u)| = sup_{i,j,k} 6|u_i u_j u_k| ≤ 6‖u‖_∞^3,

where T′ = T + symmetrize(e_i ⊗ e_j ⊗ e_k) is some neighboring tensor of T. Thus, applying the
Gaussian mechanism [9], we can (ε_0, δ_0)-privately release one output of either f_1(u) or f_2(u) by

    f_ℓ(u) + (Δ_2 f_ℓ · √(2 ln(1.25/δ_0)) / ε_0) · w,

where ℓ = 1, 2 and w ∼ N(0, I) are i.i.d. standard Normal random variables. Finally, applying
advanced composition [9] across all K = kL(R + 1) private releases we complete the proof of this
proposition. Note that both the normalization and deflation steps do not affect the differential privacy of
Algorithm 3 due to the closure-under-post-processing property of DP.
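The sensitivity bound Δ_2 f_1 ≤ 6‖u‖_∞^2 can be sanity-checked numerically; a small sketch of our own that forms a neighboring tensor and compares ‖f_1(T; u) − f_1(T′; u)‖_2 against the bound:

```python
import numpy as np

rng = np.random.RandomState(1)
d = 5
T = rng.randn(d, d, d)
i, j, k = rng.randint(d, size=3)

# Neighboring tensor: add symmetrize(e_i x e_j x e_k).
E = np.zeros((d, d, d))
for perm in {(i, j, k), (i, k, j), (j, i, k), (j, k, i), (k, i, j), (k, j, i)}:
    E[perm] = 1.0
T_prime = T + E

u = rng.randn(d)
u /= np.linalg.norm(u)
diff = np.einsum('ijk,j,k->i', T, u, u) - np.einsum('ijk,j,k->i', T_prime, u, u)
assert np.linalg.norm(diff) <= 6 * np.linalg.norm(u, np.inf) ** 2 + 1e-9
```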
The rest of the section is devoted to discussing the "utility" of Algorithm 3; i.e., to showing that the
algorithm is still capable of producing approximate eigenvectors despite the privacy constraints.
Similar to [12], we adopt the following incoherence assumption on the eigenspace of T:
Assumption 4.1 (Incoherent basis). Suppose V ∈ R^{d×k} is the stacked matrix of orthonormal
component vectors {v_i}_{i=1}^{k}. There exists a constant μ_0 > 0 such that

    (d/k) · max_{1≤i≤d} ‖V^⊤ e_i‖_2^2 ≤ μ_0.    (6)

Note that by definition, μ_0 is always in the range [1, d/k]. Intuitively, Assumption 4.1 with a small
constant μ_0 implies a relatively "flat" distribution of element magnitudes in T. The incoherence level
μ_0 plays an important role in the utility guarantee of Algorithm 3, as we show below:
Theorem 4.2 (Guaranteed recovery of eigenvector under privacy requirements). Suppose T =
Σ_{i=1}^{k} λ_i v_i^{⊗3} for λ_1 > λ_2 ≥ λ_3 ≥ ⋯ ≥ λ_k > 0 with orthonormal v_1, …, v_k ∈ R^d, and suppose
Assumption 4.1 holds with μ_0. Assume λ_1 − λ_2 ≥ c/√d for some sufficiently small universal constant
c > 0. If R = Ω(log(λ_max d)), L = Ω(k log k) and ε, δ satisfy

    ε = Ω( μ_0 k^2 log(λ_max d/δ) / λ_min ),    (7)

then with probability at least 0.9 the first eigen pair (λ̂_1, v̂_1) returned by Algorithm 3 satisfies

    |λ_1 − λ̂_1| = O(1/√d),   ‖v_1 − v̂_1‖_2 = O(1/(λ_1 √d)).
At a high level, Theorem 4.2 states that when the privacy parameter ε is not too small (i.e., privacy
requirements are not too stringent), Algorithm 3 approximately recovers the largest eigenvalue and
eigenvector with high probability. Furthermore, when μ_0 is a constant, the lower bound condition on
the privacy parameter ε does not depend polynomially upon tensor dimension d, which is a much
desired property for high-dimensional data analysis. On the other hand, similar results cannot be
achieved via simpler methods like input perturbation, as we discuss below:
Comparison with input perturbation Input perturbation is perhaps the simplest method for differentially private data analysis and has been successful in numerous scenarios, e.g., private matrix
PCA [10]. In our context, this would entail appending a random Gaussian tensor E directly onto the
input tensor T before the tensor power iterations. By the Gaussian mechanism, the standard deviation σ of
each element in E scales as σ = Θ(ε^{-1}√(log(1/δ))). On the other hand, the noise analysis for tensor
decomposition derived in [24, 2] and in the subsequent section of this paper requires σ = O(1/d) or
‖E‖_op = O(1/√d), which implies ε = Ω̃(d) (cf. Lemma F.9). That is, the privacy parameter ε must
scale linearly with tensor dimension d to successfully recover even the first principal eigenvector,
which renders the privacy guarantee of the input perturbation procedure useless for high-dimensional
tensors. Thus, we require a non-trivial new approach for differentially private tensor decomposition.
Finally, we remark that a more desirable utility analysis would bound the approximation error ‖v_i − v̂_i‖_2
for every component v_1, …, v_k, and not just the top eigenvector. Unfortunately, our current analysis
cannot handle deflation effectively, as the deflated vector v̂_i − v_i may not be incoherent. Extension to
deflated tensor decomposition remains an interesting open question.
5 Conclusion
We consider memory-efficient and differentially private tensor decomposition problems in this paper
and derive efficient algorithms for both online and private tensor decomposition based on the popular
tensor power method framework. Through an improved noise condition analysis of robust tensor
power method, we obtain sharper dimension-dependent sample complexity bounds for online tensor
decomposition and wider range of privacy parameters values for private tensor decomposition while
still retaining utility. Simulation results verify the tightness of our noise conditions in principle.
One important direction of future research is to extend our online and/or private tensor decomposition
algorithms and analysis to practical applications such as topic modeling and community detection,
where tensor decomposition acts as one critical step for data analysis. An end-to-end analysis of
online/private methods for these applications would be theoretically interesting and could also greatly
benefit practical machine learning of important models.
Acknowledgement A. Anandkumar is supported in part by Microsoft Faculty Fellowship, NSF
Career award CCF-1254106, ONR Award N00014-14-1-0665, ARO YIP Award W911NF-13-1-0084
and AFOSR YIP FA9550-15-1-0221.
8
References
[1] A. Anandkumar, R. Ge, D. Hsu, S. M. Kakade, and M. Telgarsky. Tensor decompositions for learning
latent variable models. Journal of Machine Learning Research, 15(1):2773–2832, 2014.
[2] A. Anandkumar, R. Ge, and M. Janzamin. Learning overcomplete latent variable models through tensor
methods. In Proc. of COLT, 2015.
[3] A. Anandkumar, Y.-k. Liu, D. J. Hsu, D. P. Foster, and S. M. Kakade. A spectral algorithm for latent
Dirichlet allocation. In NIPS, 2012.
[4] K. Azizzadenesheli, A. Lazaric, and A. Anandkumar. Reinforcement learning of POMDPs using spectral
methods. In COLT, 2016.
[5] B. W. Bader and T. G. Kolda. Algorithm 862: Matlab tensor classes for fast algorithm prototyping. ACM
Transactions on Mathematical Software, 32(4):635–653, 2006.
[6] M.-F. Balcan, S. Du, Y. Wang, and A. W. Yu. An improved gap-dependency analysis of the noisy power
method. In COLT, 2016.
[7] L. Birgé. An alternative point of view on Lepski's method. Lecture Notes–Monograph Series, pages
113–133, 2001.
[8] B. Cirel'son, I. Ibragimov, and V. Sudakov. Norms of Gaussian sample functions. Lecture Notes in
Mathematics, 550:20–41, 1976.
[9] C. Dwork and A. Roth. The algorithmic foundations of differential privacy. Foundations and Trends in
Theoretical Computer Science, 9(3-4):211–407, 2014.
[10] C. Dwork, K. Talwar, A. Thakurta, and L. Zhang. Analyze Gauss: optimal bounds for privacy-preserving
principal component analysis. In STOC, 2014.
[11] R. Ge, F. Huang, C. Jin, and Y. Yuan. Escaping from saddle points – online stochastic gradient for tensor
decomposition. In COLT, 2015.
[12] M. Hardt and E. Price. The noisy power method: A meta algorithm with applications. In NIPS, 2014.
[13] C. J. Hillar and L.-H. Lim. Most tensor problems are NP-hard. Journal of the ACM (JACM), 60(6):45,
2013.
[14] S. B. Hopkins, J. Shi, and D. Steurer. Tensor principal component analysis via sum-of-squares proofs. In
COLT, 2015.
[15] D. Hsu, S. M. Kakade, and T. Zhang. A tail inequality for quadratic forms of subgaussian random vectors.
Electron. Commun. Probab., 17(52):1–6, 2012.
[16] F. Huang, U. Niranjan, M. U. Hakeem, and A. Anandkumar. Online tensor methods for learning latent
variable models. Journal of Machine Learning Research, 16:2797–2835, 2015.
[17] F. Huang, I. Perros, R. Chen, J. Sun, A. Anandkumar, et al. Scalable latent tree model and its application
to health analytics. arXiv preprint arXiv:1406.4566, 2014.
[18] M. Janzamin, H. Sedghi, and A. Anandkumar. Beating the perils of non-convexity: Guaranteed training of
neural networks using tensor methods. arXiv preprint arXiv:1506.08473, 2015.
[19] G. Kamath. Bounds on the expectation of the maximum of samples from a Gaussian. [Online; accessed
April 2016].
[20] T. G. Kolda and J. R. Mayo. Shifted power method for computing tensor eigenpairs. SIAM Journal on
Matrix Analysis and Applications, 32(4):1095–1124, 2011.
[21] V. Kuleshov, A. T. Chaganty, and P. Liang. Tensor factorization via matrix factorization. In AISTATS, 2015.
[22] B. Laurent and P. Massart. Adaptive estimation of a quadratic functional by model selection. Annals of
Statistics, pages 1302–1338, 2000.
[23] P. Massart. Concentration inequalities and model selection, volume 6. Springer, 2007.
[24] A. Montanari and E. Richard. A statistical model for tensor PCA. In NIPS, 2014.
[25] C. Mu, D. Hsu, and D. Goldfarb. Successive rank-one approximations for nearly orthogonally decomposable symmetric tensors. SIAM Journal on Matrix Analysis and Applications, 36(4):1638–1659, 2015.
[26] G. W. Stewart, J.-G. Sun, and H. B. Jovanovich. Matrix perturbation theory. Academic Press, New York,
1990.
[27] R. Tomioka and T. Suzuki. Spectral norm of random tensors. arXiv:1407.1870, 2014.
[28] Y. Wang, H.-Y. Tung, A. J. Smola, and A. Anandkumar. Fast and guaranteed tensor decomposition via
sketching. In NIPS, 2015.
[29] Y. Wang and J. Zhu. Spectral methods for supervised topic models. In NIPS, 2014.
[30] R. Zemel, Y. Wu, K. Swersky, T. Pitassi, and C. Dwork. Learning fair representations. In ICML, 2013.
view:1 analyze:2 sup:4 recover:2 complicated:1 square:2 yield:1 peril:1 tolerating:1 kup:1 randomness:1 explain:2 parallelizable:1 definition:9 evaluates:1 pp:1 proof:5 recovers:5 irvine:1 sampled:2 proved:1 hsu:4 animashree:1 popular:6 hardt:1 lim:1 ut:22 improves:2 subtle:1 carefully:1 higher:3 supervised:2 improved:11 april:1 generality:1 furthermore:3 just:1 smola:1 sentation:1 hand:8 replacing:1 ei:6 artifact:1 reveal:1 quality:1 perhaps:1 facilitate:1 k22:1 verify:2 true:1 ccf:1 read:1 symmetric:10 goldfarb:1 round:2 noted:1 coincides:1 criterion:1 complete:1 demonstrate:1 cp:2 balcan:1 functional:1 jp:3 exponentially:1 volume:1 association:2 rmp:1 m1:1 extend:1 tail:1 mellon:1 significant:1 composition:1 vec:1 chaganty:1 rd:12 mathematics:1 similarly:1 calibration:3 entail:1 longer:1 whitening:6 pitassi:1 add:1 multivariate:1 recent:2 hide:1 belongs:2 commun:1 prime:1 scenario:2 store:2 n00014:1 meta:1 onr:1 inequality:2 discussing:1 preserving:2 greater:1 relaxed:2 additional:1 signal:1 u0:5 preservation:1 multiple:1 full:1 reduces:3 match:1 adapt:1 academic:1 sphere:5 post:1 niranjan:1 award:3 a1:4 variant:4 scalable:2 involving:1 cmu:1 expectation:1 arxiv:5 iteration:10 normalization:3 achieved:4 c1:6 fellowship:1 separately:1 sanitizer:1 releasing:1 unlike:1 rest:1 sure:1 massart:2 anandkumar:11 subgaussian:1 presence:1 independence:1 affect:1 escaping:1 regarding:1 idea:2 simplifies:1 ti1:1 t0:9 whether:1 pca:5 utility:6 render:1 returned:1 speaking:1 york:1 remark:6 matlab:1 useful:1 generally:1 eigenvectors:5 involve:1 ibragimov:1 amount:2 simplest:1 exist:2 nsf:1 shifted:1 estimated:1 lazaric:1 correctly:3 carnegie:1 key:2 d3:3 time1:1 relaxation:1 year:1 sum:3 tpm:2 run:2 talwar:1 powerful:1 swersky:1 place:1 family:2 reasonable:1 wu:1 frobenious:1 draw:3 appendix:5 scaling:1 bound:23 ki:7 followed:1 guaranteed:6 display:1 distinguish:1 quadratic:4 occur:1 constraint:5 infinity:1 orthogonality:1 x2:2 flat:1 software:1 u1:1 aspect:1 min:16 relatively:1 conjecture:1 department:2 poor:1 smaller:1 describes:1 increasingly:1 character:2 ur:8 across:1 kakade:3 son:1 intuitively:1 restricted:1 pr:5 heart:1 taken:1 ln:3 previously:1 remains:1 discus:2 deflation:9 mechanism:2 ge:3 tractable:2 fed:1 end:8 operation:2 decomposing:2 spectral:7 birg:1 appending:1 alternative:2 robustness:2 symmetrized:1 eigen:1 original:3 assumes:1 running:1 cf:1 top:1 dirichlet:1 exploit:1 uj:3 establish:1 classical:2 tensor:161 objective:5 question:2 added:2 occurs:1 degrades:1 concentration:1 dependence:2 said:3 gradient:3 dp:7 kekop:4 outer:1 topic:4 extent:1 trivial:2 sedghi:1 useless:1 liang:1 difficult:1 unfortunately:1 sharper:3 stoc:1 kamath:1 implementation:1 reliably:1 steurer:1 unknown:1 upper:1 descent:2 protecting:1 jin:1 immediate:1 extended:3 perturbation:11 varied:1 arbitrary:5 community:1 inferred:1 pair:4 required:1 kl:3 c3:6 california:1 established:1 barcelona:1 nip:6 usually:2 below:5 prototyping:1 candecomp:2 beating:1 regime:1 challenge:1 max:11 memory:18 power:49 event:1 critical:1 natural:3 difficulty:1 advanced:2 zhu:1 improve:1 orthogonally:3 imply:1 numerous:1 ready:1 incoherent:2 naive:1 health:2 probab:1 literature:1 sg:2 acknowledgement:1 asymptotic:2 afosr:1 loss:1 lecture:2 permutation:4 highlight:1 interesting:2 allocation:1 proven:2 foundation:2 sufficient:2 verification:1 consistent:1 vectorized:1 imposes:1 principle:2 foster:1 summary:1 repeat:1 placed:2 supported:1 side:1 weaker:1 wide:1 absolute:3 tolerance:4 benefit:1 dimension:12 world:1 symmetrize:2 suzuki:1 
reinforcement:2 projected:1 simplified:1 adaptive:1 pth:1 far:3 polynomially:1 transaction:1 approximate:4 compact:1 reveals:1 xi:2 latent:5 lepski:1 additionally:1 ku:1 robust:14 correlated:1 career:1 symmetry:1 improving:3 du:1 poly:1 aistats:1 pk:2 dense:1 main:3 linearly:5 privately:2 big:1 noise:50 montanari:1 nothing:1 fair:1 x1:2 referred:1 cubic:1 deployed:2 fashion:1 tomioka:1 sub:5 explicit:1 wish:1 exponential:1 lie:1 third:1 learns:1 theorem:21 kop:6 kuk2:1 bad:1 maxi:1 dk:2 admits:1 deflated:2 concern:2 closeness:1 intractable:2 exists:6 effectively:1 importance:1 magnitude:5 conditioned:1 nk:1 gap:2 rankness:1 chen:1 rd1:1 logarithmic:1 simply:1 saddle:1 jacm:1 hakeem:1 scalar:1 recommendation:1 springer:1 satisfies:4 acm:2 formulated:1 careful:1 rm1:1 replace:1 absence:1 price:1 hard:6 typical:1 uniformly:4 lemma:1 principal:2 jovanovich:1 svd:2 gauss:1 succeeds:1 formally:1 support:1 d1:5 ex:5 |
6,078 | 6,499 | Crowdsourced Clustering: Querying Edges vs
Triangles
Ramya Korlakai Vinayak
Department of Electrical Engineering
Caltech, Pasadena
ramya@caltech.edu
Babak Hassibi
Department of Electrical Engineering
Caltech, Pasadena
hassibi@systems.caltech.edu
Abstract
We consider the task of clustering items using answers from non-expert crowd
workers. In such cases, the workers are often not able to label the items directly,
however, it is reasonable to assume that they can compare items and judge whether
they are similar or not. An important question is what queries to make, and we
compare two types: random edge queries, where a pair of items is revealed, and
random triangles, where a triple is. Since it is far too expensive to query all possible
edges and/or triangles, we need to work with partial observations subject to a fixed
query budget constraint. When a generative model for the data is available (and we
consider a few of these) we determine the cost of a query by its entropy; when such
models do not exist we use the average response time per query of the workers
as a surrogate for the cost. In addition to theoretical justification, through several
simulations and experiments on two real data sets on Amazon Mechanical Turk,
we empirically demonstrate that, for a fixed budget, triangle queries uniformly
outperform edge queries. Even though, in contrast to edge queries, triangle queries
reveal dependent edges, they provide more reliable edges and, for a fixed budget,
many more of them. We also provide a sufficient condition on the number of
observations, edge densities inside and outside the clusters and the minimum
cluster size required for the exact recovery of the true adjacency matrix via triangle
queries using a convex optimization-based clustering algorithm.
1 Introduction
Collecting data from non-expert workers on crowdsourcing platforms such as Amazon Mechanical
Turk, Zooinverse, Planet Hunters, etc. for various applications has recently become quite popular.
Applications range from creating a labeled dataset for training and testing supervised machine
learning algorithms [1, 2, 3, 4, 5, 6] to making scientific discoveries [7, 8]. Since the workers on
the crowdsourcing platforms are often non-experts, the answers obtained will invariably be noisy.
Therefore the problem of designing queries and inferring quality data from such non-expert crowd
workers is of great importance.
As an example, consider the task of collecting labels of images, e.g., of birds or dogs of different
kinds and breeds. To label the image of a bird, or dog, a worker should either have some expertise
regarding the bird species and dog breeds, or should be trained on how to label each of them. Since
hiring experts or training non-experts is expensive, we shall focus on collecting labels of images
through image comparison followed by clustering. Instead of asking a worker to label an image
of a bird, we can show her two images of birds and ask: "Do these two birds belong to the same species?" (Figure 1(a)). Answering this comparison question is much easier than the labeling task and does not require expertise or training. Though different workers might use different criteria for comparison, e.g., color of feathers, shape, size, etc., the hope is that, averaged over the crowd workers,
we will be able to reasonably resolve the clusters (and label each).
Consider a graph of n images that needs to be clustered, where each pairwise comparison is an "edge query". Since the number of edges grows as $O(n^2)$, it is too expensive to query all edges. Instead, we want to query a subset of the edges, based on our total query budget, and cluster the resulting partially observed graph. Of course, since the workers are non-experts, their answers will be noisy and this should be taken into consideration in designing the queries. For example, it is not clear what the best strategy for choosing the subsets of edges to be queried is.
30th Conference on Neural Information Processing Systems (NIPS 2016), Barcelona, Spain.
[Figure 1: Example of (a) an edge query ("Do these two birds belong to the same species?") and (b) a triangle query ("Which of these birds belong to the same species?").]
1.1 Our Contribution
In this work we compare two ways of partially observing the graph: random edge queries, where
a pair of items is revealed for comparison, and random triangle queries, where a triplet is revealed.
We give intuitive generative models for the data obtained for both types of queries. Based on these
models we determine the cost of a query to be its entropy (the information obtained from the response
to the query). On real data sets where such a generative model may not be known we use the average
response time per query as a surrogate for the cost of the query. To fairly compare the use of edge
vs. triangle queries we fix the total budget, defined as the (aforementioned) cost per query times the
total number of queries. Empirical evidence, based on extensive simulations, as well as two real
data sets (images of birds and dogs, respectively), strongly suggests that, for a fixed query budget,
querying for triangles significantly outperforms querying for edges. Even though, in contrast to edge
queries that give information on independent edges, triangle queries give information on dependent
edges, i.e., edges that share vertices, we (theoretically and empirically) argue that triangle queries are
superior because (1) they allow for far more edges to be revealed, given a fixed query budget, and (2)
due to the self-correcting nature of triangle queries, they result in much more reliable edges.
Furthermore, for a specific convex optimization-based clustering algorithm, we also provide theoretical guarantee for the exact recovery of the true adjacency matrix via random triangle queries, which
gives a sufficient condition on the number of queries, edge densities inside and outside the clusters, and the minimum cluster size. In particular, we show that the lower bound of $\Omega(\sqrt{n})$ on the cluster size still holds even though the edges revealed via triangle queries are not independent.
1.2 Problem Setup
Consider n items with K disjoint classes/clusters plus outliers (items that do not belong to any
clusters). Consider a graph with these n items as nodes. In the true underlying graph G ? , all the items
in the same cluster are connected to each other and the items that are not in the same cluster are
not connected to each other. We do not have access to G ? . Instead we have a crowdsourced query
mechanism that can be used to observe a noisy and partial snapshot G obs of this graph. Our goal is to
find the cluster assignments from $G^{\mathrm{obs}}$. We consider the following two querying methods:
Random Edge Query: We sample E edges uniformly at random from the $\binom{n}{2}$ possible edges. Figure 1(a) shows an example of an edge query. For each edge observation, there are two possible configurations: (1) both items are similar, denoted by ll; (2) the items are not similar, denoted by lm.
Random Triangle Query: We sample T triangles uniformly at random from the $\binom{n}{3}$ possible triangles. Figure 1(b) shows an example of a triangle query. For each triangle observation, there are five possible configurations (Figure 2): (1) all items are similar, denoted by lll; (2) items 1 and 2 are similar, denoted by llm; (3) items 1 and 3 are similar, denoted by lml; (4) items 2 and 3 are similar, denoted by mll; (5) none are similar, denoted by lmj.
1"
2"
lll!
1"
3" 2"
llm!
1"
1"
3" 2"
3" 2"
3" 2"
3"
lml!
mll!
lmj!
(a)"Allowed"
1"
1"
1"
2"
3" 2"
1"
3" 2"
(b)"Not"allowed"
Figure 2: Configurations for a triangle query that are (a) observed and (b) not allowed.
2
3"
Pr(y|x)    x = lll               x = llm                                 x = lmj
y = lll    p^3 + 3p^2(1-p)       p q^2                                   q^3
y = llm    p(1-p)^2              p(1-q)^2 + (1-p)q^2 + 2pq(1-q)          q(1-q)^2
y = lml    p(1-p)^2              (1-p)q(1-q)                             q(1-q)^2
y = mll    p(1-p)^2              (1-p)q(1-q)                             q(1-q)^2
y = lmj    (1-p)^3               (1-p)(1-q)^2                            (1-q)^3 + 3q^2(1-q)
Table 1: Query confusion matrix for the triangle block model for the homogeneous case.
1.3 Related Works
[9, 10, 11, 12, 13, 14] and references therein focus on the problem of inferring true labels from
crowdsourced multiclass labeling. The common setup in these problems is as follows: A set of
items are shown to workers and labels are elicited from them. Since the workers give noisy answers,
each item is labeled by multiple workers. Algorithms based on Expectation-Maximization [14] for
maximum likelihood estimation and minimax entropy based optimization [12] have been studied for
inferring the underlying true labels. In our setup we do not ask the workers to label the items. Instead
we use comparison between items to find the clusters of items that are similar to each other.
[15] considers the problem of inferring the complete clustering on n images from a large set of
clustering on smaller subsets via crowdsourcing. Each HIT (Human Intelligence Task) is designed such that all of them share a subset of images to ensure overlap. Each HIT has M images and all $\binom{M}{2}$ comparisons are made. Each HIT is then assigned to multiple workers to get reliable answers.
These clustering are then combined using an algorithm based on variational Bayesian inference. In
our work we consider a different setup, where either pairs or triples of images are compared by the
crowd to obtain a partial graph on the images which can be clustered.
[16] considers a convex approach to graph clustering with partially observed adjacency matrices, and
provides an example of clustering images by crowdsourcing pairwise comparisons. However, it does
not consider other types of querying such as triangle queries. In this work, we extend the analysis
in [16] and show that similar performance guarantee holds for clustering via triangle queries.
Another interesting line of work is learning embeddings and kernels through triplet comparison tasks
in [17, 18, 19, 20, 21, 22] and references therein. The "triplet comparison" task in these works is of the type: "Is a closer to b or to c?", with two possible answers, to judge the relative distances between the
items. On the other hand, a triangle query in our work has five possible answers (Figure 1(b)) that
gives a clustering (discrete partitioning) of the three items.
2 Models
The probability of observing a particular configuration y is given by $\Pr(y) = \sum_{x \in \mathcal{X}} \Pr(y|x)\Pr(x)$, where x is the true configuration and $\mathcal{X}$ is the set of true configurations. Let $\mathcal{Y}$ be the set of all observed configurations. Each query has a $|\mathcal{Y}| \times |\mathcal{X}|$ confusion matrix $[\Pr(y|x)]$ associated with it. Note that the columns of this confusion matrix sum to 1, i.e., $\sum_{y \in \mathcal{Y}} \Pr(y|x) = 1$.
2.1 Random Edge Observation Models
For the random edge query case, there are two observation configurations, $\mathcal{Y} = \{ll, lm\}$, where lm denotes "no edge" and ll denotes "edge".
One-coin Edge Model: Assume all the queries are equally hard. Let $\epsilon$ be the probability of answering a question wrong. Then $\Pr(ll|ll) = \Pr(lm|lm) = 1-\epsilon$ and $\Pr(lm|ll) = \Pr(ll|lm) = \epsilon$. This model is inspired by the one-coin Dawid-Skene model [23], which is used in inference for item
label elicitation tasks. This is a very simple model and does not capture the difficulty of a query
depending on which clusters the items in the query belong to. In order to incorporate these differences
we consider the popular Stochastic Block Model (SBM) [24, 25], which is one of the most widely used models for graph clustering.
Stochastic Block Model (SBM): Consider a graph on n nodes with K disjoint clusters and outliers.
Any two nodes i and j are connected (independently of other edges) with probability p if they belong to the same cluster and with probability q otherwise. That is, $\Pr(ll|ll) = p$, $\Pr(lm|ll) = 1-p$, $\Pr(ll|lm) = q$ and $\Pr(lm|lm) = 1-q$. We assume that the density of the edges inside the clusters is higher than that between the clusters, that is, $p > q$.
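As an illustrative sketch, the adjacency pattern that edge queries would reveal under the SBM can be sampled as follows (Python/NumPy; this code and its names are ours, not from the paper):

```python
import numpy as np

def sample_sbm(labels, p, q, seed=None):
    """Sample a symmetric 0/1 adjacency matrix from the stochastic block model:
    items i and j are joined with probability p if labels[i] == labels[j],
    and with probability q otherwise (independently across pairs)."""
    rng = np.random.default_rng(seed)
    labels = np.asarray(labels)
    n = len(labels)
    same = labels[:, None] == labels[None, :]             # True where a pair shares a cluster
    edge_prob = np.where(same, p, q)
    upper = np.triu(rng.random((n, n)) < edge_prob, k=1)  # draw the upper triangle only
    return (upper | upper.T).astype(int)                  # symmetrize; diagonal stays 0
```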
2.2 Random Triangle Observation Models
For the triangle query model, there are five possible observation configurations (Figure 2): $\mathcal{Y} = \{lll, llm, lml, mll, lmj\}$.
One-coin Triangle Model: Let each question be answered correctly with probability $1-\epsilon$, and, when wrongly answered, let all the other configurations be equally confusing. So, $\Pr(lll|lll) = 1-\epsilon$ and $\Pr(llm|lll) = \Pr(lml|lll) = \Pr(mll|lll) = \Pr(lmj|lll) = \epsilon/4$, and so on. This model, as in the case of the one-coin model for the edge query, does not capture the differences in difficulty for different clusters. In order to include the differences in confusion between different clusters, we consider the following observation models for a triangle query.

Pr(y|x)    x = lll               x = llm                    x = lmj
y = lll    p^3 / z_lll           p q^2 / z_llm              q^3 / z_lmj
y = llm    p(1-p)^2 / z_lll      p(1-q)^2 / z_llm           q(1-q)^2 / z_lmj
y = lml    p(1-p)^2 / z_lll      (1-p)q(1-q) / z_llm        q(1-q)^2 / z_lmj
y = mll    p(1-p)^2 / z_lll      (1-p)q(1-q) / z_llm        q(1-q)^2 / z_lmj
y = lmj    (1-p)^3 / z_lll       (1-p)(1-q)^2 / z_llm       (1-q)^3 / z_lmj
Table 2: Query confusion matrix for the conditional block model for the homogeneous case.
For these 3 items in the triangle query, the edges are first generated from the SBM. This can give rise
to 8 configurations, out of which 5 are allowed as an answer to a triangle query while the remaining 3 are not
allowed (Figure 2). The two models differ in how they handle the configurations that are not allowed,
and are described below:
Triangle Block Model (TBM): In this model we assume that a triangle query helps in correctly
resolving the configurations that are not allowed. So, when the triangle generated from the SBM
takes one of the 3 non-allowed configurations, it is mapped to the true configuration. This gives a 5 × 5 query confusion matrix, which is given in Table 1. Note that the columns for lml and mll can be filled in a similar manner to that of llm.
Conditional Block Model (CBM): In this model, when a non-allowed configuration is encountered, it is redrawn. This is equivalent to conditioning on the allowed configurations. Define the normalizing factors $z_{lll} := 3p^3 - 3p^2 + 1$, $z_{llm} := 3pq^2 - 2pq - q^2 + 1$, and $z_{lmj} := 3q^3 - 3q^2 + 1$. The resulting 5 × 5 query confusion matrix is given in Table 2.
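To make the two models concrete, the following sketch (our own Python illustration, not code from the paper) simulates a single triangle query: the three edges are drawn from the SBM, and a draw with exactly two edges, which is not a valid partition of the triple, is either mapped back to the truth (TBM) or redrawn (CBM):

```python
import numpy as np

def triangle_query(cluster_ids, p, q, model="TBM", seed=None):
    """Simulate one triangle query on three items with the given cluster ids.
    Returns the observed edge indicators for pairs (0,1), (0,2), (1,2)."""
    rng = np.random.default_rng(seed)
    pairs = [(0, 1), (0, 2), (1, 2)]
    probs = [p if cluster_ids[i] == cluster_ids[j] else q for i, j in pairs]
    truth = np.array([cluster_ids[i] == cluster_ids[j] for i, j in pairs])
    while True:
        edges = rng.random(3) < probs
        if edges.sum() != 2:            # 0, 1, or 3 edges: an allowed configuration
            return edges.astype(int)
        if model == "TBM":              # map a non-allowed draw to the truth
            return truth.astype(int)
        # model == "CBM": condition on allowed configurations by redrawing
```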
Remark: Note that the SBM (and hence the derived models) can be made more general by considering
different edge probabilities $P_{ii}$ for cluster i and $P_{ij} = P_{ji}$ between clusters $i \neq j$.
Some intuitive properties of the triangle query models described in this section are:
1. If p > q, then the diagonal term will dominate any other term in a row. That is, $\Pr(lll|lll) > \Pr(lll|x \neq lll)$, $\Pr(llm|llm) > \Pr(llm|x \neq llm)$, and so on.
2. If p > 1/2 > q, then the diagonal term will dominate the other terms in the column, i.e., $\Pr(lll|lll) > \Pr(llm|lll) = \Pr(lml|lll) = \Pr(mll|lll) > \Pr(lmj|lll)$, etc.
3. When there is a symmetry between the items, the observation probability should be the same. That
is, if the true configuration is llm, then observing lml and mll should be equally likely, as items 1 and 2 belong to the same cluster, and so on. This property will hold in the general case
as well except for when the true configuration is lmj. In this case, the probability of observing
llm, lml and mll can be different as it depends on the clusters to which items 1, 2 and 3 belong.
2.3 Adjacency Matrix: Edge Densities and Edge Errors
The adjacency matrix, $A = A^T$, of a graph can be partially filled by querying a subset of edges.
Since we query edges randomly, most of the edges are seen only once. Some edges might get queried
multiple times, in which case, we randomly pick one of them. Similarly we can also partially fill
the adjacency matrix from triangle queries. We fill the unobserved entries of the adjacency matrix
with zeros. We can perform clustering on A to obtain a partition of items. The true underlying graph $G^*$ has perfect clusters (disjoint cliques). So, the performance of clustering on A depends on how noisy it is. This in turn depends on the probability of error for each revealed edge in A, i.e., what is the probability that a true edge was registered as no-edge and vice versa. The hope is that triangle queries help workers to resolve the edges better and hence have fewer errors among the revealed edges
than those obtained from edge queries.
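A sketch of this bookkeeping follows (our own Python; `answers` is a hypothetical list of (i, j, response) triples collected from the crowd, not a data structure from the paper):

```python
import numpy as np

def fill_adjacency(n, answers, seed=None):
    """Build the partially observed 0/1 adjacency matrix A. Unobserved
    entries stay 0; an edge queried multiple times keeps one answer at random."""
    rng = np.random.default_rng(seed)
    seen = {}
    for i, j, resp in answers:               # resp = 1 ("ll") or 0 ("lm")
        key = (min(i, j), max(i, j))
        seen.setdefault(key, []).append(resp)
    A = np.zeros((n, n), dtype=int)
    for (i, j), votes in seen.items():
        A[i, j] = A[j, i] = rng.choice(votes)  # pick one answer at random
    return A
```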
If we make E edge queries, then the probability of observing an edge is $r = E/\binom{n}{2}$. If we make T triangle queries, the probability of observing an edge is $r_T = 3T/\binom{n}{2}$. Let $rp$ ($r_T p_T$) and $rq$ ($r_T q_T$) be the edge probability inside the clusters and between the clusters, respectively, in A when it is partially filled via edge (triangle) queries. For simplicity, consider a graph with K clusters of size m each (n = Km). The probability that a randomly chosen edge in A filled via edge queries is in error can be computed as $p^{\mathrm{edge}}_{\mathrm{err}} := (1 - rp)(m-1)/(n-1) + rq\,(n-m)/(n-1)$. Similarly, we can write $p^{\Delta}_{\mathrm{err}}$. Under reasonable conditions on the parameters involved, $p^{\Delta}_{\mathrm{err}} < p^{\mathrm{edge}}_{\mathrm{err}}$.
[Figure 3: Fraction of entries in error in the matrix recovered via Program 4.1, plotted against p for the One-coin, Triangle Block, and Conditional Block models at r = 0.2 and r = 0.3, comparing edge queries (E) with triangle queries (TE and TB).]
For example, in the case of the one-coin model, for edge queries, $rp = r(1-\epsilon)$ and $rq = r\epsilon$. For triangle queries, $r_T p_T = r_T(1 - 3\epsilon/4)$ and $r_T q_T = r_T\,\epsilon/2$. If $r_T < 2r$, we have $r_T q_T < rq$ and $r_T p_T > rp$, and hence $p^{\Delta}_{\mathrm{err}} < p^{\mathrm{edge}}_{\mathrm{err}}$.
For the TBM, when $p > 1/2 > q$, with $r < r_T < r/(1-q)$, we get $r_T p_T > rp$ and $r_T q_T < rq$, and hence $p^{\Delta}_{\mathrm{err}} < p^{\mathrm{edge}}_{\mathrm{err}}$. For the CBM, when $p > 1/2 > q$, under reasonable assumptions on r, $r_T q_T < rq$, but depending on the values of r and $r_T$, $r_T p_T$ can fall below $rp$. If the decrease in edge probability between the clusters is large enough to overcome the fall in edge density inside the clusters, then $p^{\Delta}_{\mathrm{err}} < p^{\mathrm{edge}}_{\mathrm{err}}$.
In summary, when A is filled by triangle queries, the edge density between the clusters decreases and
the overall number of edge errors decreases (we observe this in real data as well, see Table 3). Both
of these are desirable for clustering algorithms that try to approximate the minimum cut to find the
clusters like spectral clustering.
3 Value of a Query
To make a meaningful comparison between edge queries and triangle queries, we need to fix a budget.
Suppose we have a budget to make E edge queries. To find the number of triangle queries that can
be made with the same budget, we need to define the value (cost) of a triangle query. Although a
triangle query has 3 edges, they are not independent and hence its relative cost is less than that of
making 3 random edge queries. Thus we need a fair way to compare the value of a triangle query to
that of an edge query.
Let $s \in [0,1]^{|\mathcal{Y}|}$, $\sum_{y \in \mathcal{Y}} s_y = 1$, be the probability mass function (pmf) of the observation in a query, with $s_y := \Pr(y) = \sum_{x \in \mathcal{X}} \Pr(y|x)\Pr(x)$. We define the value of a query as the information obtained from the observation, which is measured by its entropy: $H(s) = -\sum_{i \in \mathcal{Y}} s_i \log(s_i)$. Ideally, the cost of a query should be proportional to the amount of information it provides. So, if E is the number of edge queries, then the number of triangle queries we can make with the same budget is $T_B = E \cdot H_E / H_\Delta$.
We should remark that determining the above cost requires knowledge of the generative model of the
graph, which may not be available for empirical data sets. In such situations, a very reasonable cost
is the relative time it takes for a worker to respond to a triangle query, compared to an edge query. (In
this manner, a fixed budget means a fixed amount of time for the queries to be completed.) A good
rule of thumb, which is widely supported by empirical data, is the cost of 1.5, ostensibly because in
triangle queries workers need to study three images, rather than two, and so it takes them 50% longer
to respond. The end result is that, for a fixed budget, triangle queries reveal twice as many edges.
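A minimal sketch of this accounting (our own Python illustration; the pmfs below are hypothetical placeholders, since in practice they come from $\Pr(y) = \sum_x \Pr(y|x)\Pr(x)$ under the chosen model):

```python
import numpy as np

def entropy(pmf):
    """H(s) = -sum_y s_y log(s_y) over the observed-configuration pmf."""
    s = np.asarray(pmf, dtype=float)
    s = s[s > 0]
    return float(-(s * np.log(s)).sum())

H_E = entropy([0.45, 0.55])                       # edge answers {ll, lm}
H_T = entropy([0.25, 0.15, 0.15, 0.15, 0.30])     # {lll, llm, lml, mll, lmj}
E = 8630                                          # edge-query budget
T_B = int(E * H_E / H_T)                          # triangles affordable at equal budget
T_E = E // 3                                      # triangles revealing the same # of edges
```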
4 Guaranteed Recovery of the True Adjacency Matrix
In this section we provide a sufficient condition for the full recovery of the adjacency matrix corresponding to the underlying true $G^*$ from a partially observed noisy A filled via random triangle queries. We consider the following convex program from [16]:
$$\begin{aligned}
\underset{L,S}{\text{minimize}} \quad & \|L\|_* + \lambda \|S\|_1 \qquad (4.1)\\
\text{s.t.} \quad & 1 \geq L_{i,j} \geq S_{i,j} \geq 0 \ \text{ for all } i,j \in \{1,2,\ldots,n\}, \quad L_{i,j} = S_{i,j} \ \text{ whenever } A_{i,j} = 0,\\
& \sum_{i,j=1}^{n} L_{i,j} \geq |R|,
\end{aligned}$$
where $\|\cdot\|_*$ is the nuclear norm (sum of the singular values of the matrix), $\|\cdot\|_1$ is the $\ell_1$-norm (sum of absolute values of the entries of the matrix), and $\lambda \geq 0$ is the regularization parameter. L is the low-rank matrix corresponding to the true cluster structure, S is the sparse error matrix that accounts only for the missing edges inside the clusters, and |R| is the size of the cluster region.
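A sketch of Program 4.1 in CVXPY follows (our own illustration, not the authors' code; the constraint $L_{i,j} = S_{i,j}$ on zero entries of A is encoded with an elementwise mask, and the solver is left to CVXPY's defaults):

```python
import cvxpy as cp
import numpy as np

def solve_program_41(A, lam, R_size):
    """Nuclear-norm + l1 relaxation: L is the low-rank cluster matrix,
    S the sparse matrix absorbing missing in-cluster edges."""
    n = A.shape[0]
    L = cp.Variable((n, n))
    S = cp.Variable((n, n))
    zero_mask = (A == 0).astype(float)             # entries where A_ij = 0
    constraints = [L <= 1, L >= S, S >= 0,
                   cp.multiply(zero_mask, L - S) == 0,
                   cp.sum(L) >= R_size]
    problem = cp.Problem(cp.Minimize(cp.normNuc(L) + lam * cp.norm1(S)),
                         constraints)
    problem.solve()
    return L.value

# e.g. lam = 1 / np.sqrt(n), with R_size set to the (estimated) cluster-region size.
```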
When A is filled using a subset of random edge queries, under the SBM with parameters $\{n, n_{\min}, K, p, q\}$, [16] provides the following sufficient condition for the guaranteed recovery of the true $G^*$:
$$\frac{1}{\lambda}\, n_{\min}\, r\,(p - q) \;\geq\; 2\sqrt{n\, rq(1-rq)} \;+\; 2\sqrt{n_{\max}\, rp(1-rp)} \;+\; rq(1-rq), \qquad (4.2)$$
where $n_{\min}$ and $n_{\max}$ are the sizes of the smallest and the largest clusters respectively. We extend the analysis in [16] to the case when A is filled via a subset of random triangle queries, and obtain the following sufficient condition:
Theorem 1. If the following condition holds:
$$\frac{1}{\lambda}\, n_{\min}\, r_T\,(p_T - q_T) \;\geq\; 3\left(2\sqrt{n\,\frac{r_T q_T}{3}\Big(1-\frac{r_T q_T}{3}\Big)} \;+\; 2\sqrt{n_{\max}\,\frac{r_T p_T}{3}\Big(1-\frac{r_T p_T}{3}\Big)} \;+\; \frac{r_T q_T}{3}\Big(1-\frac{r_T q_T}{3}\Big)\right),$$
then Program 4.1 succeeds in recovering the true $G^*$ with high probability.
When A is filled using random edge queries, the entries are independent of each other (since the
edges are independent in the SBM). When we use triangle queries to fill A, this no longer holds as
the 3 edges filled from a triangle query are not independent. Due to the limited space, we present
only the key idea of our proof: The analysis in [16] relies on the independence of entries of A to use
Bernstein-type concentration results for the sum of independent random variables and the bound on
the spectral norm of a random matrix with independent entries. We make the following observation:
Split A filled via random triangle queries into three parts, A = A1 + A2 + A3 . For each triangle
query, allocate one edge to each part randomly. If an edge gets queried as a part of multiple triangle
queries, keep one of them randomly. Each Ai now contains independent entries. The edge density
in $A_i$ is $r_T p_T/3$ and $r_T q_T/3$ inside the clusters and outside, respectively. This allows us to use the results on concentration of sums of independent random variables and the $O(\sqrt{n})$ bound on the spectral norm of random matrices, with a penalty due to the triangle inequality for the spectral norm.
It can be seen that, when the number of revealed edges is the same (rT = r) and the probability of
correctly identifying edges is the same ($p_T = p$ and $1 - q_T = 1 - q$), then the recovery condition of Theorem 1 is worse than that of (4.2). (This is expected, since triangle queries yield dependent edges.) However, this is overcompensated by the fact that triangle queries result in more reliable edges ($p_T - q_T > p - q$) and also reveal more edges ($r_T > r$, since the relative cost is less than 3).
To illustrate this, consider a graph on n = 600 nodes with K = 3 clusters of equal size m = 200. We generate the adjacency matrices from the different models in Section 2, varying p from 0.65 to 0.9. For the one-coin models, $1 - \epsilon = p$; for the rest of the models, q = 0.25. We run the improved convex program (4.1) with $\lambda = 1/\sqrt{n}$. Figure 3 shows the fraction of the entries in the recovered matrix that are wrong compared to the true adjacency matrix for r = 0.2 and 0.3 (averaged over 5 runs; $T_E = \lceil E/3 \rceil$ and $T_B = E H_E/H_\Delta$). We note that the error drops significantly more when A is filled via triangle queries than via edge queries.
5 Performance of Spectral Clustering: Simulated Experiments
We generate adjacency matrices from the edge query and the triangle query models (Section 2) and
run the spectral clustering algorithm [26] on them. We compare the output clustering with the ground
truth via variation of information (VI) [27] which is defined for two clusterings (partitions) of a
dataset and has information theoretical justification. Smaller values of VI indicate a closer match
and a VI of 0 means that the clusterings are identical. We compare the performance of the spectral clustering algorithm on the partial adjacency matrices obtained from querying: (1) $E = \lceil r \binom{n}{2} \rceil$ random edges, (2) $T_B = E \cdot H_E/H_\Delta$ random triangles, which has the same budget as querying E edges, and (3) $T_E = \lceil E/3 \rceil < T_B$ random triangles, which yields the same number of edges as the adjacency matrix obtained by querying E edges.
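For reference, VI can be computed directly from the joint distribution of two label vectors; the sketch below (our own Python, following the definition in [27]) returns 0 for identical clusterings:

```python
import numpy as np
from collections import Counter

def variation_of_information(labels_a, labels_b):
    """VI(A, B) = H(A|B) + H(B|A), estimated from the empirical joint pmf."""
    n = len(labels_a)
    joint = Counter(zip(labels_a, labels_b))
    pa = Counter(labels_a)
    pb = Counter(labels_b)
    vi = 0.0
    for (a, b), c in joint.items():
        p_ab, p_a, p_b = c / n, pa[a] / n, pb[b] / n
        vi += p_ab * (np.log(p_a / p_ab) + np.log(p_b / p_ab))
    return vi
```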
Varying Edge Density Inside the Clusters: Consider a graph on n = 450 nodes with K = 3
clusters of equal size m = 150. We vary the edge density inside the clusters, p, from 0.55 to 0.9. For the one-coin models, $1 - \epsilon = p$, and q = 0.25 for the rest. Figure 4 shows the performance of spectral
clustering for r = 0.15 and r = 0.3 (averaged over 5 runs).
Varying Cluster Sizes: Let N = 1200. Consider a graph with K clusters of equal size $m = \lfloor N/K \rfloor$ and $n = Km$. We vary K from 2 to 12, which varies the cluster sizes from 600 (large clusters) to 100 (small clusters; note that $\sqrt{1200} \approx 35$). We set p = 0.7. For the one-coin models,
[Figure 4: VI for spectral clustering output for varying edge density inside the clusters, for the One-coin, Triangle Block, and Conditional Block models at r = 0.15 and r = 0.3, comparing E, TE, and TB queries.]
[Figure 5: VI for spectral clustering output for varying number of clusters (K), for the One-coin, Triangle Block, and Conditional Block models at r = 0.2 and r = 0.3, comparing E, TE, and TB queries.]
$1 - \epsilon = p$ and q = 0.25 for the rest. Figure 5 shows the performance of spectral clustering for
r = 0.2 and 0.3. The performance is significantly better with triangle queries compared to that with
edge queries.
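For completeness, a sketch of the simulation loop (our own Python; it assumes scikit-learn's SpectralClustering with a precomputed affinity, which matches the algorithm of [26] only up to implementation details, and reuses the variation_of_information helper sketched above):

```python
from sklearn.cluster import SpectralClustering

def simulate_once(true_labels, K, A):
    """Cluster a partially observed adjacency matrix A and score against truth."""
    pred = SpectralClustering(n_clusters=K,
                              affinity="precomputed").fit_predict(A)
    return variation_of_information(true_labels, pred)
```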
6 Experiments on Real Data
We use Amazon Mechanical Turk as the crowdsourcing platform. For edge queries, each HIT (Human
Intelligence Task) has 30 queries of random pairs, a sample is shown in Figure 1(a). For triangle
queries, each HIT has 20 queries, with each query having 3 random images, a sample is shown in
Figure 1(b). Each HIT is answered by a unique worker. Note that we do not provide any examples
of different classes or any training to do the task. We fill A as described in Section 2.3 and run the
k-means, the spectral clustering, and Program 4.1 followed by spectral clustering on it. Since we do
not know the model parameters and hence have no access to the entropy information, we use the average time taken as the "cost" or value of the query. For E edge comparisons, the equivalent number of triangle comparisons would be $T = E \cdot t_E/t_\Delta$, where $t_E$ and $t_\Delta$ are the average times taken to answer an edge query and a triangle query respectively. We consider two datasets:
1. Dogs3 dataset has images of the following 3 breeds of dogs from the Stanford Dogs Dataset [28]:
Norfolk Terrier (172), Toy Poodle (150) and Bouvier des Flanders (151), giving a total of 473
dog images. On average, a worker took $t_E = 8.4$s to answer an edge query and $t_\Delta = 11.7$s to answer a triangle query.
2. Birds5 dataset has 5 bird species from CUB-200-2011 dataset [29]: Laysan Albatross (60), Least
Tern (60), Artic Tern (58), Cardinal (57) and Green Jay (57). We also add 50 random species as
outliers, giving us a total of 342 bird images. On average, workers took $t_E = 8.3$s to answer an edge query and $t_\Delta = 12.1$s to answer a triangle query.
Details of the data obtained from the edge query and triangle query experiments are summarized in Table 3. Note that the error in the revealed edges drops significantly for triangle queries.
For the Dogs3 dataset, the empirical edge densities inside and between the clusters for A obtained
from the edge queries ($\hat{P}_E$) and the triangle queries ($\hat{P}_T$) are:
$$\hat{P}_E = \begin{bmatrix} 0.7577 & 0.1866 & 0.2043 \\ 0.1866 & 0.6117 & 0.2487 \\ 0.2043 & 0.2487 & 0.7391 \end{bmatrix}, \qquad \hat{P}_T = \begin{bmatrix} 0.7139 & 0.1138 & 0.1253 \\ 0.1138 & 0.6231 & 0.1760 \\ 0.1253 & 0.1760 & 0.7576 \end{bmatrix}.$$
Dataset, Query       # Workers   # Unique Edges    % of Edges Seen   % of Edge Errors
Dogs3, Edge Query    300         E' = 8630         7.73%             25.2%
Dogs3, Δ Query       150         3T'_E = 8644      7.74%             19.66%
Dogs3, Δ Query       320         3T' = 17,626      15.79%            20%
Birds5, Edge Query   300         E' = 8319         14.27%            14.82%
Birds5, Δ Query      155         3T'_E = 8600      14.74%            10.96%
Birds5, Δ Query      285         3T' = 14,773      25.34%            11.4%
Table 3: Summary of the data collected in the real experiments (E: edge query, Δ: triangle query).
Query (E: edge, Δ: triangle)   k-means                 Spectral Clustering    Convex Program
E' = 8630                      0.8374 ± 0.0121 (K=2)   0.6972 ± 0 (K=3)       0.5176 ± 0 (K=3)
3T'_E = 8644                   0.6675 ± 0.0246 (K=3)   0.5690 ± 0 (K=3)       0.4605 ± 0 (K=3)
3T' = 17,626                   0.3268 ± 0 (K=3)        0.3470 ± 0 (K=3)       0.2279 ± 0 (K=3)
Table 4: VI of the clustering output by k-means, spectral clustering, and the convex program for the Dogs3 dataset.
Query          k-means                 Spectral Clustering     Convex Program
E' = 8319      1.4504 ± 0.0338 (K=2)   1.2936 ± 0.0040 (K=4)   1.0392 ± 0 (K=4)
3T'_E = 8600   1.1793 ± 0.0254 (K=3)   1.1299 ± 0 (K=4)        0.9105 ± 0 (K=4)
3T' = 14,773   0.7989 ± 0 (K=4)        0.8713 ± 0 (K=4)        0.9135 ± 0 (K=4)
Table 5: VI of the clustering output by k-means, spectral clustering, and the convex program for the Birds5 dataset.
For the Birds5 dataset, the empirical edge densities within and between the various clusters in A filled via edge queries ($\hat{P}_E$) and triangle queries ($\hat{P}_T$) are:
$$\hat{P}_E = \begin{bmatrix} 0.801 & 0.304 & 0.208 & 0.016 & 0.032 & 0.100 \\ 0.304 & 0.778 & 0.656 & 0.042 & 0.131 & 0.123 \\ 0.208 & 0.656 & 0.912 & 0.062 & 0.094 & 0.096 \\ 0.016 & 0.042 & 0.062 & 0.855 & 0.154 & 0.110 \\ 0.032 & 0.131 & 0.094 & 0.154 & 0.958 & 0.158 \\ 0.100 & 0.123 & 0.096 & 0.110 & 0.158 & 0.224 \end{bmatrix}, \quad
\hat{P}_T = \begin{bmatrix} 0.786 & 0.207 & 0.151 & 0.011 & 0.021 & 0.058 \\ 0.207 & 0.797 & 0.625 & 0.023 & 0.047 & 0.100 \\ 0.151 & 0.625 & 0.865 & 0.024 & 0.060 & 0.071 \\ 0.011 & 0.023 & 0.024 & 0.874 & 0.059 & 0.078 \\ 0.021 & 0.047 & 0.060 & 0.059 & 0.943 & 0.080 \\ 0.058 & 0.100 & 0.071 & 0.076 & 0.080 & 0.182 \end{bmatrix}.$$
As we see the triangle queries give rise to an adjacency matrix with significantly less confusion
across the clusters (compare the off-diagonal entries in $\hat{P}_E$ and $\hat{P}_T$).
Tables 4 and 5 show the performance of clustering algorithms (in terms of variation of information)
for the two datasets. The number of clusters found is given in brackets. We note that for both datasets,
the performance is significantly better with triangle queries than with edge queries. Furthermore,
even with fewer triangle queries ($3T'_E \approx E$) than the budget allows, the clustering obtained is better compared to edge queries.
7 Summary
In this work we compare two ways of querying for crowdsourcing clustering using non-experts:
random edge comparisons and random triangle comparisons. We provide simple and intuitive models
for both. Compared to edge queries that reveal independent entries of the adjacency matrix, triangle
queries reveal dependent ones (edges in a triangle share a vertex). However, due to their error-correcting capabilities, triangle queries result in more reliable edges and, furthermore, because the
cost of a triangle query is less than that of 3 edge queries, for a fixed budget, triangle queries reveal
many more edges. Simulations based on our models, as well as empirical evidence strongly support
these facts. In particular, experiments on two real datasets suggest that clustering items from random
triangle queries significantly outperforms random edge queries when the total query budget is fixed.
We also provide a theoretical guarantee for the exact recovery of the true adjacency matrix using
random triangle queries. In the future we will focus on exploiting the structure of triangle queries via
tensor representations and sketches, which might further improve the clustering performance.
References
[1] Vikas C. Raykar, Shipeng Yu, Linda H. Zhao, Gerardo Hermosillo Valadez, Charles Florin, Luca Bogoni, and Linda Moy. Learning from crowds. J. Mach. Learn. Res., 11:1297-1322, August 2010.
[2] Rion Snow, Brendan O'Connor, Daniel Jurafsky, and Andrew Y. Ng. Cheap and fast - but is it good?: Evaluating non-expert annotations for natural language tasks. In Proceedings of the Conference on Empirical Methods in Natural Language Processing, EMNLP '08, pages 254-263, 2008.
[3] Luis von Ahn, Benjamin Maurer, Colin McMillen, David Abraham, and Manuel Blum. reCAPTCHA: Human-based character recognition via web security measures. Science, 321(5895):1465-1468, 2008.
[4] A. Sorokin and D. Forsyth. Utility data annotation with Amazon Mechanical Turk. In Computer Vision and Pattern Recognition Workshops, 2008. CVPRW '08. IEEE Computer Society Conference on, pages 1-8. IEEE, June 2008.
[5] Peter Welinder, Steve Branson, Serge Belongie, and Pietro Perona. The multidimensional wisdom of crowds. In Neural Information Processing Systems Conference (NIPS), 2010.
[6] Jinfeng Yi, Rong Jin, Anil K. Jain, Shaili Jain, and Tianbao Yang. Semi-crowdsourced clustering: Generalizing crowd labeling by robust distance metric learning. In Neural Information Processing Systems Conference (NIPS), 2012.
[7] Robert Simpson, Kevin R. Page, and David De Roure. Zooniverse: Observing the world's largest citizen science platform. In Proceedings of the 23rd International Conference on World Wide Web, WWW '14 Companion, 2014.
[8] Chris Lintott, Megan E. Schwamb, Charlie Sharzer, Debra A. Fischer, Thomas Barclay, Michael Parrish, Natalie Batalha, Steve Bryson, Jon Jenkins, Darin Ragozzine, Jason F. Rowe, Kevin Schawinski, Rovert Gagliano, Joe Gilardi, Kian J. Jek, Jari-Pekka Pääkkönen, and Tjapko Smits. Planet hunters: New Kepler planet candidates from analysis of quarter 2, 2012. arXiv:1202.6007. Comment: Submitted to AJ.
[9] David R. Karger, Sewoong Oh, and Devavrat Shah. Iterative learning for reliable crowdsourcing systems. In Neural Information Processing Systems Conference (NIPS), 2011.
[10] David R. Karger, Sewoong Oh, and Devavrat Shah. Budget-optimal task allocation for reliable crowdsourcing systems. Operations Research, 62(1):1-24, 2014.
[11] Aditya Vempaty, Lav R. Varshney, and Pramod K. Varshney. Reliable crowdsourcing for multi-class labeling using coding theory. CoRR, abs/1309.3330, 2013.
[12] Denny Zhou, Sumit Basu, Yi Mao, and John C. Platt. Learning from the wisdom of crowds by minimax entropy. In Advances in Neural Information Processing Systems 25, pages 2195-2203. 2012.
[13] Qiang Liu, Jian Peng, and Alexander T. Ihler. Variational inference for crowdsourcing. In Neural Information Processing Systems Conference (NIPS). 2012.
[14] Yuchen Zhang, Xi Chen, Dengyong Zhou, and Michael I. Jordan. Spectral methods meet EM: A provably optimal algorithm for crowdsourcing. In Neural Information Processing Systems Conference (NIPS), 2014.
[15] Ryan G. Gomes, Peter Welinder, Andreas Krause, and Pietro Perona. Crowdclustering. In Advances in Neural Information Processing Systems 24, pages 558-566. 2011.
[16] Ramya Korlakai Vinayak, Samet Oymak, and Babak Hassibi. Graph clustering with missing data: Convex algorithms and analysis. In Neural Information Processing Systems Conference (NIPS), 2014.
[17] Omer Tamuz, Ce Liu, Serge Belongie, Ohad Shamir, and Adam Tauman Kalai. Adaptively learning the crowd kernel. CoRR, abs/1105.1033, 2011.
[18] Michael Wilber, Sam Kwak, and Serge Belongie. Cost-effective HITs for relative similarity comparisons. In Human Computation and Crowdsourcing (HCOMP), Pittsburgh, November 2014.
[19] Eric Heim, Hamed Valizadegan, and Milos Hauskrecht. Machine Learning and Knowledge Discovery in Databases: European Conference, ECML PKDD 2014, chapter Relative Comparison Kernel Learning with Auxiliary Kernels, pages 563-578. Springer Berlin Heidelberg.
[20] L. van der Maaten and K. Weinberger. Stochastic triplet embedding. In Machine Learning for Signal Processing (MLSP), 2012 IEEE International Workshop on, pages 1-6, September 2012.
[21] Catherine Wah, Grant Van Horn, Steve Branson, Subhransu Maji, Pietro Perona, and Serge Belongie. Similarity comparisons for interactive fine-grained categorization. In CVPR, pages 859-866. IEEE, 2014.
[22] Hannes Heikinheimo and Antti Ukkonen. The crowd-median algorithm. In HCOMP. AAAI, 2013.
[23] A. P. Dawid and A. M. Skene. Maximum likelihood estimation of observer error-rates using the EM algorithm. Journal of the Royal Statistical Society, Series C (Applied Statistics), 28(1):20-28, 1979.
[24] Paul W. Holland, Kathryn Blackmond Laskey, and Samuel Leinhardt. Stochastic blockmodels: First steps. Social Networks, 5(2):109-137, 1983.
[25] Anne Condon and Richard M. Karp. Algorithms for graph partitioning on the planted partition model. Random Structures and Algorithms, 18(2):116-140, 2001.
[26] Andrew Y. Ng, Michael I. Jordan, and Yair Weiss. On spectral clustering: Analysis and an algorithm. In Advances in Neural Information Processing Systems, pages 849-856. MIT Press, 2001.
[27] Marina Meila. Comparing clusterings: an information based distance. J. Multivar. Anal., 98(5):873-895, May 2007.
[28] Aditya Khosla, Nityananda Jayadevaprakash, Bangpeng Yao, and Li Fei-Fei. Novel dataset for fine-grained image categorization. In First Workshop on Fine-Grained Visual Categorization, IEEE Conference on Computer Vision and Pattern Recognition, Colorado Springs, CO, June 2011.
[29] C. Wah, S. Branson, P. Welinder, P. Perona, and S. Belongie. The Caltech-UCSD Birds-200-2011 dataset. Technical Report CNS-TR-2011-001, California Institute of Technology, 2011.
Minkowski-r Back-Propagation: Learning in Connectionist Models with Non-Euclidian Error Signals
Stephen Jose Hanson and David J. Burr
Bell Communications Research
Morristown, New Jersey 07960
Abstract
Many connectionist learning models are implemented using a gradient descent
in a least squares error function of the output and teacher signal. The present model
generalizes, in particular, back-propagation [1] by using Minkowski-r power metrics. For small r's a "city-block" error metric is approximated and for large r's the "maximum" or "supremum" metric is approached, while for r=2 the standard back-propagation model results. An implementation of Minkowski-r back-propagation is described, and several experiments are done which show that different values of r may be desirable for various purposes. Different r values may be appropriate for the reduction of the effects of outliers (noise), modeling the input space with more compact clusters, or modeling the statistics of a particular domain more naturally or
in a way that may be more perceptually or psychologically meaningful (e.g. speech or
vision).
1. Introduction
The recent resurgence of connectionist models can be traced to their ability to
do complex modeling of an input domain. It can be shown that neural-like networks
containing a single hidden layer of non-linear activation units can learn to do a
piece-wise linear partitioning of a feature space [2]. One result of such a partitioning
is a complex gradient surface on which decisions about new input stimuli will be
made. The generalization, categorization and clustering properties of the network are therefore determined by this mapping of input stimuli to this gradient surface in the output space. This gradient surface is a function of the conditional probability distributions of the output vectors given the input feature vectors as well as a function of the error relating the teacher signal and output.
Presently many of the models have been implemented using least squares error.
In this paper we describe a new model of gradient descent back-propagation [1] using Minkowski-r power error metrics. For small r's a "city-block" error measure (r=1) is approximated and for larger r's a "maximum" or supremum error measure is approached, while the standard case of Euclidian back-propagation is a special case with r=2. First we derive the general case and then discuss some of the implications of varying the power in the general metric.
2. Derivation of Minkowski-r Back-propagation
The standard back-propagation is derived by minimizing least squares error as
a function of connection weights within a completely connected layered network.
The error for the Euclidian case is (for a single input-output pair),
$$E = \frac{1}{2} \sum_{j} (\hat{y}_j - y_j)^2 \qquad (1)$$
where $\hat{y}$ is the activation of a unit and y represents an independent teacher signal. The activation of a unit ($\hat{y}$) is typically computed by normalizing the input from other units (x) over the interval (0,1) while compressing the high and low ends of this range.
A common function used for this normalization is the logistic,
$$\hat{y}_j = \frac{1}{1 + e^{-x_j}} \qquad (2)$$
The input to a unit (x) is found by summing products of the weights and
corresponding activations from other units,
$$x_i = \sum_{h} w_{hi}\, \hat{y}_h \qquad (3)$$
where $\hat{y}_h$ ranges over the units in the fan-in of unit i and $w_{hi}$ represents the strength of the connection between unit i and unit h.
A gradient for the Euclidian or standard back-propagation case could be found
by finding the partial of the error with respect to each weight, and can be expressed in
this three-term differential,
$$\frac{\partial E}{\partial w_{hi}} = \frac{\partial E}{\partial \hat{y}_i}\, \frac{\partial \hat{y}_i}{\partial x_i}\, \frac{\partial x_i}{\partial w_{hi}}, \qquad (4)$$
which from the equations before turns out to be,
$$\frac{\partial E}{\partial w_{hi}} = (\hat{y}_i - y_i)\,\hat{y}_i (1 - \hat{y}_i)\,\hat{y}_h. \qquad (5)$$
Generalizing the error for Minkowski-r power metrics (see Figure 1 for the
family of curves),
$$E = \frac{1}{r} \sum_{i} \left|\hat{y}_i - y_i\right|^{r} \qquad (6)$$
..
~
~
:
I:
C'f
0
?
.eo
?
?20
0
ao
...
10
NfIII
Figure 1: Minkowski-r Family
Using equations 2-4 above with equation 6, we can easily find an expression for the gradient in the general Minkowski-r case,
$$\frac{\partial E}{\partial w_{hi}} = \left|\hat{y}_i - y_i\right|^{r-1} \hat{y}_i (1 - \hat{y}_i)\,\hat{y}_h\, \mathrm{sgn}(\hat{y}_i - y_i). \qquad (7)$$
This gradient is used in the weight update rule proposed by Rumelhart, Hinton and
Williams [1],
$$w_{hi}(n+1) = \alpha\, \frac{\partial E}{\partial w_{hi}} + w_{hi}(n) \qquad (8)$$
Since the gradient computed for the hidden layer is a function of the gradient for the
output, the hidden layer weight updating proceeds in the same way as in the
Euclidian case [1], simply substituting this new Minkowski-r gradient.
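A minimal sketch of the resulting output-layer rule follows (our own NumPy illustration, not the original implementation; for r=2 it reduces exactly to the standard delta rule, and the learning-rate and momentum defaults mirror the values reported in the simulations below):

```python
import numpy as np

def minkowski_delta(y_out, y_teach, r):
    """Output error signal from Eq. (7): |y - t|^(r-1) sgn(y - t) y (1 - y)."""
    err = y_out - y_teach
    return np.abs(err) ** (r - 1) * np.sign(err) * y_out * (1 - y_out)

def weight_update(w, delta, y_fan_in, alpha=0.05, momentum=0.9, prev_dw=0.0):
    """Descent step in the spirit of Eq. (8), with the usual momentum term."""
    dw = -alpha * np.outer(delta, y_fan_in) + momentum * prev_dw
    return w + dw, dw
```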
It is also possible to define a gradient over r such that a minimum in error
would be sought. Such a gradient was suggested by White [3, see also 4] for
maximum likelihood estimation of r, and can be shown to be,
$$\frac{\partial (\log E)}{\partial r} = \Big(1 - \frac{1}{r}\Big)\frac{1}{r} + \frac{1}{r^2}\log(r) + \frac{1}{r}\, 2\psi\!\Big(\frac{1}{r}\Big) + \frac{1}{r}\, 2\left|\hat{y}_i - y_i\right| - \frac{1}{r}\left(\left|\hat{y}_i - y_i\right|\right)^{r} \log\!\left(\left|\hat{y}_i - y_i\right|\right) \qquad (9)$$
An approximation of this gradient (using the last term of equation 9) has been
implemented and investigated for simple problems and shown to be fairly robust in
recovering similar r values. However, it is important that the r update rule changes
slower than the weight update rule. In the simulations we ran r was changed once for
every 10 times the weight values were changed. This rate might be expected to vary
with the problem and rate of convergence. Local minima may be expected in larger
problems while seeking an optimal r. It may be more informative for the moment to
examine different classes of problems with fixed r and consider the specific rationale
for those classes of problems.
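A sketch of that approximate r update follows (our own illustration; only the last term of Eq. (9) is used, as in the experiments described above, and the small epsilon guard is ours to keep the logarithm finite):

```python
import numpy as np

def update_r(r, y_out, y_teach, lr_r=0.01, tiny=1e-8):
    """One approximate gradient step on r, applied once per ~10 weight updates."""
    e = np.abs(y_out - y_teach) + tiny
    grad = np.mean(-(1.0 / r) * e ** r * np.log(e))   # last term of Eq. (9)
    return max(r - lr_r * grad, 1.0)                  # keep r in the metric range
```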
3. Variations in r
Various r values may be useful for various aspects of representing information in the feature domain. Changing r basically results in a reweighting of errors from output bits¹. Small r's give less weight to large deviations and tend to reduce the influence of outlier points in the feature space during learning. In fact, it can be shown that if the distributions of feature vectors are non-gaussian, then the r=2 case
1. It is possible to entertain r values that are negative, which would give largest weight to small errors close to zero and smallest weight to very large errors. Values of r less than 1 generally are non-metric, i.e., they violate at least one of the metric axioms. For example, r<0 violates the triangle inequality. For some problems this may make sense and the need for a metric error weighting may be unnecessary. These issues are not explored in this paper.
will not be a maximum likelihood estimator of the weights [5]. The city block case,
r=1, in fact, arises if the underlying conditional probability distributions are Laplace
[5]. More generally, r's less than two will tend to model non-gaussian distributions
where the tails of the distributions are more pronounced than in the gaussian. Better
estimators can be shown to exist for general noise reduction and have been studied in
the area of robust estimation procedures [5] of which the Minkowski-r metric is only
one possible case to consider.
r < 2. It is generally recommended that r=1.5 may be optimal for many noise
reduction problems [6]. However, noise reduction may also be expected to vary with
the problem and nature of the noise. One example we have looked at involves the
recovery of an arbitrary 3 dimensional smooth surface as shown in Figure 2a, after
the addition of random noise. This surface was generated from a gaussian curve in
two dimensions. Uniform random noise equal to the width (standard deviation) of the
surface shape was added point-wise to the surface producing the noise plus surface
shape shown in Figure 2b.
Figure 2: Shape surface (2a), shape plus noise surface (2b) and recovered shape
surface (2c)
The shape in Figure 2a was used as target points for Minkowski-r back-propagation²
and recovered with some distortion of the slope of the shape near the peak of the
2. All simulation runs, unless otherwise stated, used the same learning rate (.05) and smoothing value (.9)
and stopping criterion defined in terms of absolute mean deviation. The number of iterations to meet
the stopping criterion varied considerably as r was changed (see below).
surface (see Figure 2c). Next the noise plus shape surface was used as target points
for the learning procedure with r=2. The shape shown in Figure 3a was recovered,
however, with considerable distortion around the base and peak. The value of r was
reduced to 1.5 (Figure 3b) and then finally to 1.2 (Figure 3c) before shape distortions
were eliminated. Although the major properties of the shape of the surface were
recovered, the scale seems distorted (however, it is easily restored with renormalization
into the 0-1 range).
Figure 3: Shape surface recovered with r=2 (3a), r=1.5 (3b) and r=1.2 (3c)
r > 2. Large r's tend to weight large deviations. When noise is not possible in
the feature space (as in an arbitrary boolean problem) or where the token clusters are
compact and isolated, then simpler (in the sense of the number and placement of
partition planes) generalization surfaces may be created with larger r values. For
example, in the simple XOR problem, the main effect of increasing r is to pull the
decision boundaries closer into the non-zero targets (compare high activation regions
in Figure 4a and 4b).
In this particular problem clearly such compression of the target regions does not
constitute simpler decision surfaces. However, if more hidden units are used than are
needed for pattern class separation, then increasing r during training will tend to
reduce the number of cuts in the space to the minimum needed. This seems to be
primarily due to the sensitivity of the hyper-plane placement in the feature space to
the geometry of the targets.
A more complex case illustrating the same idea comes from an example
suggested by Minsky & Papert [7] called "the mesh". This type of pattern
recognition problem is also, like XOR, a non-linearly separable problem. An optimal
solution involves only three cuts in feature space to separate the two "meshed"
clusters (see Figure 5a).
Figure 4: XOR solved with r=2 (4a) and r=4 (4b)
Figure 5: Mesh problem with minimum cut solution (5a) and performance surface (5b)
Typical solutions for r=2 in this case tend to use a large number of hidden units to
separate the two sets of exemplars (see Figure 5b for a performance surface). For
example, in Figure 6a notice that a typical (based on several runs) Euclidean backprop starting with 16 hidden units has found a solution involving five decision
boundaries (lines shown in the plane also representing hidden units) while the r=3
case used primarily three decision boundaries and placed a number of other
boundaries redundantly near the center of the meshed region (see Figure 6b) where
there is maximum uncertainty about the cluster identification.
Figure 6: Mesh solved with r=2 (6a) and r=3 (6b)
Speech Recognition. A final case in which large r's may be appropriate is data
that has been previously processed with a transformation that produced compact
regions requiring separation in the feature space. One example we have looked at
involves spoken digit recognition. The first 10 cepstral coefficients of spoken digits
("one" through "ten") were used for input to a network. In this case an advantage is
shown for larger r's with smaller training set sizes. Shown in Figure 7 are transfer
data for 50 spoken digits replicated in ten different runs per point (bars show standard
error of the mean). Transfer shows a training set size effect for both r=2 and r=3,
however for the larger r value at smaller training set sizes (10 and 20) note that
transfer is enhanced.
We speculate that this may be due to the larger r backprop creating discrimination
regions that are better able to capture the compactness of the clusters inherent in a
small number of training points.
4. Convergence Properties
It should be generally noted that as r increases, convergence time tends to grow
roughly linearly (although this may be problem dependent). Consequently,
decreasing r can significantly improve convergence, without much change to the
nature of the solution. Further, if noise is present, decreasing r may reduce it
dramatically. Note finally that the gradient for the Minkowski-r back-propagation is
nonlinear and therefore more complex for implementing learning procedures.
[Plot: percent correct transfer vs. training set size (10-50) for r=2 and r=3; bars show
standard error over 10 replications of 50 transfer points per point.]
Figure 7: Digit Recognition Set Size Effect
5. Summary and Conclusion
A new procedure which is a variation on the Back-propagation algorithm is
derived and simulated in a number of different problem domains. Noise in the target
domain may be reduced by using power values less than 2 and the sensitivity of
partition planes to the geometry of the problem may be increased with increasing
power values. Other types of objective functions should be explored for their
potential consequences on network resources and ensuing pattern recognition
capabilities.
References
1. Rumelhart, D. E., Hinton, G. E., Williams, R., Learning Internal Representations by
Error Propagation, Nature, 1986.
2. Burr, D. J. and Hanson, S. J., Knowledge Representation in Connectionist Networks,
Bellcore Technical Report.
3. White, H., Personal Communication, 1987.
4. White, H., Some Asymptotic Results for Learning in Single Hidden Layer
Feedforward Network Models, Unpublished Manuscript, 1987.
5. Mosteller, F. & Tukey, J., Robust Estimation Procedures, Addison-Wesley, 1980.
6. Tukey, J., Personal Communication, 1987.
7. Minsky, M. & Papert, S., Perceptrons: An Introduction to Computational
Geometry, MIT Press, 1969.
| 65 |@word illustrating:1 compression:1 seems:2 simulation:2 euclidian:6 moment:1 reduction:4 emn:1 recovered:5 activation:5 dx:1 mesh:3 partition:2 shape:12 update:3 discrimination:1 tenn:1 plane:4 lr:1 simpler:2 five:1 differential:1 replication:1 burr:2 expected:3 roughly:1 examine:1 decreasing:2 increasing:3 underlying:1 redundantly:1 spoken:3 finding:1 transformation:1 every:1 ti:1 morristown:1 partitioning:2 unit:14 producing:1 before:2 local:1 tends:1 consequence:1 id:2 meet:1 might:1 plus:3 studied:1 co:2 range:2 yj:2 block:3 backpropagation:1 digit:4 procedure:5 area:1 axiom:1 bell:1 significantly:1 close:1 layered:1 influence:1 center:1 williams:2 starting:1 recovery:1 rule:3 estimator:2 pull:1 dw:1 variation:2 laplace:1 target:6 enhanced:1 rumelhart:2 approximated:2 recognition:5 updating:1 cut:3 solved:2 capture:1 region:5 compressing:1 connected:1 ran:1 personal:2 completely:1 triangle:1 conver:1 easily:2 jersey:1 various:3 derivation:1 describe:1 approached:2 hyper:1 whi:2 larger:6 distortion:3 otherwise:1 ability:1 statistic:1 final:1 advantage:1 yle:1 product:1 detennined:1 pronounced:1 convergence:3 cluster:4 r1:1 categorization:1 derive:1 exemplar:1 sa:1 implemented:3 recovering:1 involves:3 come:1 sgn:1 violates:1 implementing:1 backprop:2 dio:1 ao:1 generalization:1 mapping:1 substituting:1 major:1 sought:1 vary:2 smallest:1 purpose:1 estimation:3 infonnation:1 largest:1 city:3 mit:1 clearly:1 gaussian:4 varying:1 og:2 derived:2 ax:1 likelihood:2 sense:2 dependent:1 stopping:2 typically:1 compactness:1 hidden:8 i1:1 issue:1 bellcore:1 smoothing:1 special:1 fairly:1 equal:1 once:1 eliminated:1 represents:3 connectionist:4 stimulus:2 report:1 inherent:1 primarily:2 geometry:3 minsky:2 llr:1 dyi:2 implication:1 closer:1 partial:1 perfonnance:1 unless:1 isolated:1 increased:1 modeling:3 boolean:1 ence:1 infonnative:1 deviation:4 uniform:1 teacher:3 aw:2 considerably:1 st:1 peak:2 sensitivity:2 mosteller:1 containing:1 creating:1 li:2 potential:1 de:4 speculate:1 coefficient:1 piece:1 tukey:2 capability:1 slope:1 square:3 ir:2 xor:3 identification:1 produced:1 basically:1 wai:1 naturally:1 knowledge:1 back:11 manuscript:1 wesley:1 done:1 nonlinear:1 reweighting:1 propagation:10 logistic:1 effect:4 requiring:1 white:3 during:2 width:1 noted:1 criterion:1 wise:2 common:1 yil:1 tail:1 relating:1 iyi:3 surface:17 base:1 nonnalizing:1 recent:1 inequality:1 tenns:1 yi:9 minimum:3 eo:1 recommended:1 signal:3 stephen:1 propenies:1 desirable:1 smooth:1 technical:1 involving:1 vision:1 metric:9 psychologically:1 normalization:1 iteration:1 lea:1 addition:1 interval:1 grow:1 tend:5 near:2 feedforward:1 reduce:3 idea:1 expression:1 speech:2 constitute:1 dramatically:1 useful:1 generally:4 ten:2 clusten:1 processed:1 reduced:2 exist:1 notice:1 per:1 traced:1 changing:1 run:3 jose:1 uncertainty:1 distorted:1 family:2 separation:2 decision:5 bit:1 layer:4 fll:1 fan:1 strength:1 placement:2 aspect:1 minkowski:12 separable:1 smaller:2 em:1 presently:1 outlier:2 equation:4 resource:1 previously:1 discus:1 turn:1 needed:2 addison:1 end:1 appropriate:2 slower:1 clustering:1 seeking:1 objective:1 added:1 looked:2 restored:1 fa:1 gradient:15 separate:2 simulated:1 ensuing:1 minimizing:1 negative:1 resurgence:1 stated:1 implementation:1 descent:2 meshed:2 hinton:2 communication:3 varied:1 arbitrary:2 david:1 pair:1 unpublished:1 connection:2 hanson:2 able:1 suggested:2 proceeds:1 below:1 pattern:3 bar:1 power:6 representing:2 improve:1 created:1 asymptotic:1 rationale:1 changed:3 token:1 placed:1 
last:1 summary:1 cepstral:1 absolute:1 curve:2 dimension:1 boundary:4 made:1 replicated:1 compact:3 supremum:2 lir:1 summing:1 unnecessary:1 learn:1 nature:3 robust:3 transfer:4 investigated:1 complex:4 domain:5 main:1 linearly:2 noise:13 renormalization:1 papert:1 weighting:1 xt:1 specific:1 explored:2 perceptually:1 generalizing:1 simply:1 expressed:1 conditional:2 consequently:1 considerable:1 change:2 typical:2 called:1 meaningful:1 perceptrons:1 internal:1 arises:1 |
6,080 | 650 | Diffusion Approximations for the
Constant Learning Rate
Backpropagation Algorithm and
Resistance to Local Minima
William Finnoff
Siemens AG, Corporate Research and Development
Otto-Hahn-Ring 6
8000 Munich 83, Fed. Rep. Germany
Abstract
In this paper we discuss the asymptotic properties of the most commonly used variant of the backpropagation algorithm in which network weights are trained by means of a local gradient descent on examples drawn randomly from a fixed training set, and the learning
rate η of the gradient updates is held constant (simple backpropagation). Using stochastic approximation results, we show that for
η → 0 this training process approaches a batch training and provide results on the rate of convergence. Further, we show that for
small η one can approximate simple back propagation by the sum
of a batch training process and a Gaussian diffusion which is the
unique solution to a linear stochastic differential equation. Using
this approximation we indicate the reasons why simple backpropagation is less likely to get stuck in local minima than the batch
training process and demonstrate this empirically on a number of
examples.
1
INTRODUCTION
The original (simple) backpropagation algorithm, incorporating pattern for pattern
learning and a constant learning rate η ∈ (0, ∞), remains in spite of many real (and
imagined) deficiencies the most widely used network training algorithm, and a vast
body of literature documents its general applicability and robustness. In this paper
we will draw on the highly developed literature of stochastic approximation theory to demonstrate several asymptotic properties of simple backpropagation. The
close relationship between backpropagation and stochastic approximation methods
has been long recognized, and various properties of the algorithm for the case of
decreasing learning rate η_{n+1} < η_n, n ∈ N, were shown for example by White
[W,89a], [W,89b] and Darken and Moody [D,M,91]. Hornik and Kuan [H,K,91]
used comparable results for the algorithm with constant learning rate to derive
weak convergence results.
In the first part of this paper we will show that simple backpropagation has the
same asymptotic dynamics as batch training in the small learning rate limit. As
such, anything that can be expected of batch training can also be expected in simple
backpropagation as long as the learning rate of the algorithm is very small. In the
special situation considered here (in contrast to that in [H,K,91]) we will also be
able to provide a result on the speed of convergence. In the next part of the paper,
Gaussian approximations for the difference between the actual training process and
the limit are derived. It is shown that this difference, (properly renormalized), converges to the solution of a linear stochastic differential equation. In the final section
of the paper, we combine these results to provide an approximation for the simple
back propagation training process and use this to show why simple backpropagation
will be less inclined to get stuck in local minima than batch training. This ability
to avoid local minima is then demonstrated empirically on several examples.
2 NOTATION
Define the parametric version of a single hidden layer network activation function
with h inputs, m outputs and q hidden units

f : R^d × R^h → R^m, (θ, x) → (f^1(θ, x), ..., f^m(θ, x)),

by setting for x ∈ R^h, z = (x_1, ..., x_h, 1), θ = (γ, β) and u = 1, ..., m,

f^u(θ, x) = f^u((γ^u, β), x) = ψ( \sum_{j=1}^{q} γ_j^u ψ(β_j z^T) + γ_{q+1}^u ),

where z^T denotes the transpose of z and d = m(q + 1) + q(h + 1) denotes the
number of weights in the network. Let ((y_k, x_k))_{k=1,...,T} be a set of training examples, consisting of targets (y_k)_{k=1,...,T} and inputs (x_k)_{k=1,...,T}. We then define the
parametric error function

U(y, x, θ) = ||y − f(θ, x)||^2,

and for every θ, the cumulative gradient

h(θ) = −(1/T) \sum_{k=1}^{T} (∂U/∂θ)(y_k, x_k, θ).
3 APPROXIMATION WITH THE ODE
We will be considering the asymptotic properties of network training processes
induced by the starting value θ_0, the gradient (or direction) function ∂U/∂θ, the
learning rate η and an infinite training sequence (y_n, x_n)_{n∈N}, where each (y_n, x_n)
example is drawn at random from the set {(y_1, x_1), ..., (y_T, x_T)}. One defines the
discrete parameter process θ^η = (θ^η_n)_{n∈Z_+} of weight updates by setting

θ^η_n = θ_0                                            for n = 0,
θ^η_n = θ^η_{n−1} − η (∂U/∂θ)(y_n, x_n, θ^η_{n−1})     for n ∈ N,
and the corresponding continuous parameter process (θ^η(t))_{t∈[0,∞)} by setting
θ^η(t) = θ^η_n for t ∈ [(n − 1)η, nη), n ∈ N. The first question that we will investigate is that
of the 'small learning rate limit' of the continuous parameter process θ^η, i.e. the
limiting properties of the family θ^η for η → 0. We show that the family of (stochastic) processes (θ^η)_{η>0} converges with probability one to a limit process θ̄, where θ̄
denotes the solution to the cumulative gradient equation,

θ̄(t) = θ_0 + \int_0^t h(θ̄(s)) ds.

Here, for θ_0 a constant, this solution is deterministic. This result corresponds
to a 'law of large numbers' for the weight update process, in which the small learning
rate (in the limit) averages out the stochastic fluctuations.
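As a purely illustrative aside (ours, not part of the paper), the following Python sketch simulates the two processes just defined for a linear least-squares model; the data, dimensions, and step counts are hypothetical. For small η the constant-rate process tracks an Euler discretization of the cumulative gradient ODE.

import numpy as np

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(100, 3))              # inputs x_k
theta_star = np.array([1.0, -2.0, 0.5])
Y = X @ theta_star + 0.1 * rng.normal(size=100)    # targets y_k

def grad_U(y, x, theta):                           # dU/dtheta for U = (y - <theta, x>)^2
    return 2.0 * (x @ theta - y) * x

def h(theta):                                      # cumulative gradient -(1/T) sum_k dU/dtheta
    return -np.mean([grad_U(y, x, theta) for y, x in zip(Y, X)], axis=0)

eta, n_steps = 0.01, 2000
theta_sgd = np.zeros(3)                            # simple backpropagation, theta^eta_n
theta_ode = np.zeros(3)                            # Euler step of dtheta/dt = h(theta)
for _ in range(n_steps):
    k = rng.integers(len(Y))                       # draw one example at random
    theta_sgd -= eta * grad_U(Y[k], X[k], theta_sgd)
    theta_ode += eta * h(theta_ode)

print(np.linalg.norm(theta_sgd - theta_ode))       # small for small eta (cf. Theorem 3.2)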
Central to any application of the stochastic approximation results is the derivation
of local Lipschitz and linear growth bounds for ∂U/∂θ and h. That is the subject of
the following,

Lemma (3.1) i) There exists a constant K > 0 so that

sup_{(y,x)} ||(∂U/∂θ)(y, x, θ)|| ≤ K(1 + ||θ||)

and

||h(θ)|| ≤ K(1 + ||θ||).

ii) For every G > 0 there exists a constant L_G so that for any θ, θ̃ ∈ [−G, G]^d,

sup_{(y,x)} ||(∂U/∂θ)(y, x, θ) − (∂U/∂θ)(y, x, θ̃)|| ≤ L_G ||θ − θ̃||

and

||h(θ) − h(θ̃)|| ≤ L_G ||θ − θ̃||.
Proof: The calculations on which this result is based are tedious but straightforward, making repeated use of the fact that products and sums of locally Lipschitz
continuous functions are themselves locally Lipschitz continuous. It is even possible
to provide explicit values for the constants given above. ∎
Denoting with P (resp. E) the probability (resp. mathematical expectation) of the
processes defined above, we can present the results on the probability of deviations
of the process θ^η from the limit θ̄.
Theorem (3.2) Let τ, δ ∈ (0, ∞). Then there exists a constant B_τ (which
doesn't depend on η) so that

P( sup_{s≤τ} ||θ^η(s) − θ̄(s)|| > δ ) ≤ (1/δ) B_τ η.
Proof: The first part of the proof requires that one finds bounds for θ^η(t) and θ̄(t)
for t ∈ [0, τ]. This is accomplished using the results of Lemma (3.1) and Gronwall's
Lemma. This places η-independent bounds on B_τ. The remainder of the proof uses
Theorem (9), §1.5, Part II of [Ben,Met,Pri,87]. The required conditions (A1), (A2)
follow directly from our hypotheses, and (A3), (A4) from Lemma (3.1). Due to the
boundedness of the variables (y_n, x_n)_{n∈N} and θ_0, condition (A5) is trivially fulfilled. ∎
It should be noted that the constant B_τ is usually dependent on τ and may indeed
increase exponentially (in τ) unless it is possible to show that the training process
remains in some bounded region for t → ∞. This is not necessarily due exclusively
to the difference between the stochastic approximation and the discrete parameter
cumulative gradient process, but also to the error between the discrete (Euler
approximation) and continuous parameter versions of (3.3).
4 GAUSSIAN APPROXIMATIONS
In this section we will give a Gaussian approximation for the difference between
the training process θ^η and the limit θ̄. Although in the limit these coincide, for
η > 0 the training process fluctuates away from the limit in a stochastic fashion.
The following Gaussian approximation provides an estimate for the size and nature
of these fluctuations depending on the second order statistics (variance/covariance
matrix) of the weight update process. Define for any t ∈ [0, ∞),

θ̃^η(t) = (θ^η(t) − θ̄(t)) / √η.

Further, for i = 1, ..., d we denote with (∂U/∂θ)^i(y, x, θ) (resp. h^i(θ)) the i-th coordinate
of (∂U/∂θ)(y, x, θ) (resp. h(θ)). Then define for i, j = 1, ..., d, θ ∈ R^d

R^{ij}(θ) = E[ (∂U/∂θ)^i(y_n, x_n, θ) (∂U/∂θ)^j(y_n, x_n, θ) ] − h^i(θ) h^j(θ).

Thus, for any n ∈ N, θ ∈ R^d, R(θ) represents the covariance matrix of the random
elements (∂U/∂θ)(y_n, x_n, θ). We can then define for the symmetric matrix R(θ) a further
R^{d×d} valued matrix R^{1/2}(θ) with the property that R(θ) = R^{1/2}(θ)(R^{1/2}(θ))^T.
The following result represents a central limit theorem for the training process. This
permits a type of second order approximation of the fluctuations of the stochastic
training process around its deterministic limit.

Theorem (4.1): Under the assumptions given above, the distributions of the
processes θ̃^η, η > 0, converge weakly (in the sense of weak convergence of measures)
for η → 0 to a uniquely defined measure L(θ̃), where θ̃ denotes the solution to the
following linear stochastic differential equation

dθ̃(t) = (∂h/∂θ)(θ̄(t)) θ̃(t) dt + R^{1/2}(θ̄(t)) dW(t),   θ̃(0) = 0,

where W denotes a standard d-dimensional Brownian motion (i.e. with covariance
matrix equal to the identity matrix).
noted in the proof of Theorem(3.2), under our hypotheses, the conditions (Al)(A5) are fulfilled. Define for i,j
1, ... ,d, (y,x) E Im+h, 0 E Rd, w ij (y,x,6)
pi(y, x, O)pi (y, x, O)-hi(O)hj (0), and 11 = p. Under our hypotheses, h has ~~>ntinuous
first and second order derivatives for all 0 E Rd and the function R (R?'ki=l, ... ,d
as well as W = (Wij)i,;=l, .. .,d fulfill the remaining requirements of (AS) as follows:
(A8)i) and (A8)ii) are trivial consequence of the definition of Rand W. Finally,
setting Pa
P4
0 and JJ
1, (AS)iii) then can be derived directly from the
definitions of Wand Rand Lemma(5.1)ii).
=
=
=
= =
=
?
5 RESISTANCE TO LOCAL MINIMA
In this section we combine the results of the two preceding sections to provide
a Gaussian approximation of simple backpropagation. Recalling the results and
notation of Theorem (3.2) and Theorem (4.1) we have for any t ∈ [0, ∞),

θ^η(t) = θ̄(t) + η^{1/2} θ̃(t) + o(η^{1/2}).
Using this approximation we have:
-For 'very small' learning rate η, simple backpropagation and batch learning will
produce essentially the same results since the stochastic portion of the process
(controlled by η^{1/2}) will be negligible.
-Otherwise, there is a non-negligible stochastic element in the training process which
can be approximated by the Gaussian diffusion θ̃.
-This diffusion term gives simple backpropagation a 'quasi-annealing' character, in
which the cumulative gradient is continuously perturbed by the Gaussian term θ̃,
allowing it to escape local minima with small shallow basins of attraction.
It should be noted that the rest term will actually have a better convergence rate
than the indicated o(η^{1/2}). The calculation of exact rates, though, would require a
generalized version of the Berry-Esseen theorem. To our knowledge, no such results
are available which would be applicable to the situation described above.
6 EMPIRICAL RESULTS
The imperviousness of simple backpropagation to local minima, which is part of
neural network 'folklore', is documented here in four examples. A single hidden
layer feedforward network with ψ = tanh, ten hidden units and one output was
trained with both simple backpropagation and batch training using data generated
by four different models. The data consisted of pairs (y_i, x_i), i = 1, ..., T,
T ∈ N, with targets y_i ∈ R and inputs x_i = (x_i^1, ..., x_i^K) ∈ [−1, 1]^K, where
y_i = g(x_i^1, ..., x_i^j) + u_i, for j, K ∈ N. The first experiment was based on
an additive structure g having the following form with j = 5 and K = 10,
g(x_i^1, ..., x_i^5) = \sum_{k=1}^{5} sin(α_k x_i^k), α_k ∈ R. The second model had a
product structure with j = 3, K = 10 and g(x_i^1, ..., x_i^3) = \prod_{k=1}^{3} α_k x_i^k,
α_k ∈ R. The third structure g considered was constructed with j = 5 and K = 10,
using sums of Radial Basis Functions (RBFs) as follows:
g(x_i^1, ..., x_i^5) = \sum_{l=1}^{5} (−1)^l \exp( −\sum_{k=1}^{5} (a_l^k − x_i^k)^2 / (2σ^2) ).
The centers a_l were chosen by independent drawings from a uniform distribution on
[−1, 1]^5. The final experiment was conducted using data generated by a feedforward
network activation function. For more details concerning the construction of the
examples used here consult [F,H,Z,92].
For each model three training runs were made using the same vector of starting
weights for both simple backpropagation and batch training. As can be seen, in all
but one example the batch training process got stuck in a local minimum producing
much worse results than those found using simple backpropagation. Due to the
wide array of structures used to generate data and the number of data sets used, it
would be hard to dismiss the observed phenomena as being example dependent.
[Four plots: training error (×10^{-3}) vs. epochs for simple BP and batch training on
the four models: network mapping, product mapping, sums of RBF's, and sums of sin's.]
7 REFERENCES
[Ben,Met,Pri,87] Benveniste, A., Metivier, M., Priouret, P., Adaptive Algorithms
and Stochastic Approximations, Springer Verlag, 1987.
[Bou,85] Bouton, C., Approximation Gaussienne d'algorithmes stochastiques a dynamique Markovienne. Thesis, Paris VI, (in French), 1985.
[D,M,91] Darken, C. and Moody, J., Note on learning rate schedules for stochastic
optimization, in Advances in Neural Information Processing Systems 3, Lippmann, R., Moody, J., and Touretzky, D., ed., Morgan Kaufmann, San Mateo,
1991.
[F,H,Z,92] Finnoff, W., Hergert, F. and Zimmermann, H. G., Improving model
selection by nonconvergent methods. To appear in Neural Networks.
[H,K,91] Hornik, K. and Kuan, C.M., Convergence of Learning Algorithms with
constant learning rates, IEEE Trans. on Neural Networks 2, pp. 484-489,
1991.
[W,89a] White, H., Some asymptotic results for learning in single hidden-layer
feedforward network models, Jour. Amer. Stat. Ass. 84, no. 408, p. 1003-1013, 1989.
[W,89b] White, H., Learning in artificial neural networks: A statistical perspective,
Neural Computation 1, p. 425-464, 1989.
| 650 |@word version:3 tedious:1 covariance:3 boundedness:1 exclusively:1 denoting:1 document:1 activation:2 additive:1 update:4 xk:2 ifx:1 provides:1 mathematical:1 constructed:1 differential:3 combine:2 indeed:1 expected:4 ra:1 themselves:1 decreasing:1 actual:1 considering:1 notation:2 bounded:1 developed:1 ag:1 every:2 growth:1 rm:1 unit:2 yn:5 producing:1 appear:1 negligible:2 local:10 limit:11 consequence:1 fluctuation:3 au:2 mateo:1 unique:1 backpropagation:21 empirical:1 got:1 radial:1 spite:1 get:2 close:1 selection:1 deterministic:2 demonstrated:1 yt:1 straightforward:1 go:1 starting:2 bou:1 attraction:1 array:1 coordinate:1 limiting:1 resp:4 target:2 construction:1 exact:1 us:2 hypothesis:3 pa:1 element:1 approximated:1 observed:1 ft:1 region:1 inclined:1 yk:2 ui:1 dynamic:1 renormalized:1 metivier:1 trained:2 depend:1 weakly:1 basis:1 various:1 derivation:1 artificial:1 fluctuates:1 widely:1 valued:1 drawing:1 otherwise:1 otto:1 ability:1 statistic:1 kuan:2 final:2 sequence:1 net:1 product:3 remainder:1 p4:1 convergence:6 requirement:1 produce:1 ring:1 converges:2 ben:3 derive:1 oo:10 depending:1 stat:1 ij:2 indicate:1 met:3 direction:1 iio:1 stochastic:16 require:1 ao:1 im:1 around:1 considered:2 exp:1 mapping:1 a2:1 applicable:1 tanh:1 gaussian:8 fulfill:1 avoid:1 hj:1 derived:2 properly:1 ily:1 contrast:1 sense:1 dependent:2 hidden:5 wij:1 quasi:1 germany:1 development:1 special:1 equal:1 having:1 represents:2 escape:1 randomly:1 consisting:1 william:1 recalling:1 a5:2 highly:1 investigate:1 tj:9 held:1 unless:1 applicability:1 deviation:1 euler:1 uniform:1 conducted:1 perturbed:1 jour:1 continuously:1 moody:3 jo:3 thesis:1 central:2 worse:1 ek:1 derivative:1 vi:1 sup:3 portion:1 variance:1 kaufmann:1 weak:2 touretzky:1 ed:1 definition:2 pp:1 proof:7 finnoff:5 wh:1 knowledge:1 schedule:1 actually:1 back:2 ok:2 follow:1 rand:2 amer:1 though:1 d:1 ntj:1 dismiss:1 propagation:2 french:1 defines:1 indicated:1 oil:1 consisted:1 symmetric:1 pri:3 white:3 sin:2 uniquely:1 noted:3 anything:2 generalized:1 demonstrate:2 motion:1 empirically:2 exponentially:1 imagined:1 ai:1 rd:4 trivially:1 had:1 brownian:1 perspective:1 verlag:1 rep:1 accomplished:1 yi:3 seen:1 minimum:8 morgan:1 preceding:1 recognized:1 converge:1 ii:9 corporate:1 xf:2 calculation:2 long:3 concerning:1 controlled:1 variant:1 luj:1 essentially:1 expectation:1 esseen:1 ode:1 annealing:1 resistence:2 rest:1 sr:1 cummulative:4 induced:1 subject:1 consult:1 feedforward:3 iii:2 ture:1 br:2 jj:1 locally:2 ten:2 ken:1 documented:1 generate:1 fulfilled:2 discrete:3 four:2 drawn:2 diffusion:7 vast:1 sum:5 wand:2 run:1 place:1 family:2 draw:1 comparable:1 layer:3 bound:3 hi:2 ki:1 deficiency:1 bp:1 ri:2 speed:1 munich:1 ate:2 character:1 shallow:1 making:1 equation:4 remains:2 discus:1 fed:1 available:1 permit:1 nen:2 away:1 batch:11 robustness:1 original:1 denotes:5 remaining:1 a4:1 mple:1 folklore:1 atc:1 hahn:1 question:1 parametric:2 gradient:7 trivial:1 reason:1 relationship:1 priouret:1 allowing:1 darken:2 descent:1 situation:2 y1:1 pair:1 required:1 iih:2 paris:1 trans:1 able:1 usually:1 pattern:2 reo:2 soo:1 epoch:1 literature:2 berry:1 asymptotic:5 law:1 basin:1 benveniste:1 pi:2 gl:1 transpose:1 wide:1 xn:4 doesn:1 stochastiques:1 stuck:3 commonly:1 made:1 coincide:1 adaptive:1 san:1 approximate:1 lippmann:1 xi:3 continuous:5 cxl:1 why:2 nature:1 hornik:2 improving:1 as:1 necessarily:1 da:1 sp:1 rh:2 repeated:1 body:1 x1:2 en:2 tl:1 fashion:1 explicit:1 xh:1 xl:2 third:1 theorem:9 xt:4 nonconvergent:1 jt:1 
algorithmes:1 a3:1 incorporating:1 exists:3 ci:1 te:1 nez:1 likely:1 springer:1 corresponds:1 a8:2 identity:1 rbf:2 lipschitz:3 hard:1 infinite:1 lemma:5 la:1 siemens:1 bouton:1 phenomenon:1 |
6,081 | 6,500 | Linear Relaxations for Finding Diverse Elements in
Metric Spaces
Aditya Bhaskara
University of Utah
bhaskara@cs.utah.edu
Mehrdad Ghadiri
Sharif University of Technology
ghadiri@ce.sharif.edu
Vahab Mirrokni
Google Research
mirrokni@google.com
Ola Svensson
EPFL
ola.svensson@epfl.ch
Abstract
Choosing a diverse subset of a large collection of points in a metric space is a fundamental problem, with applications in feature selection, recommender systems,
web search, data summarization, etc. Various notions of diversity have been proposed, tailored to different applications. The general algorithmic goal is to find
a subset of points that maximize diversity, while obeying a cardinality (or more
generally, matroid) constraint. The goal of this paper is to develop a novel linear
programming (LP) framework that allows us to design approximation algorithms
for such problems. We study an objective known as sum-min diversity, which
is known to be effective in many applications, and give the first constant factor
approximation algorithm. Our LP framework allows us to easily incorporate additional constraints, as well as secondary objectives. We also prove a hardness result
for two natural diversity objectives, under the so-called planted clique assumption.
Finally, we study the empirical performance of our algorithm on several standard
datasets. We first study the approximation quality of the algorithm by comparing
with the LP objective. Then, we compare the quality of the solutions produced by
our method with other popular diversity maximization algorithms.
1 Introduction
Computing a concise, yet diverse and representative subset of a large collection of elements is a
central problem in many areas. In machine learning, it has been used for feature selection [23],
and in recommender systems [24]. There are also several data mining applications, such as web
search [21, 20], news aggregation [2], etc. Diversity maximization has also found applications
in drug discovery, where the goal is to choose a small and diverse subset of a large collection of
compounds to use for testing [16].
A general way to formalize the problem is as follows: we are given a set of objects in a metric
space, and the goal is to find a subset of them of a prescribed size so as to maximize some measure
of diversity (a function of the distances between the chosen points). One well studied example of
a diversity measure is the minimum pairwise distance between the selected points ? the larger it is,
the more ?mutually separated? the chosen points are. This, as well as other diversity measures have
been studied in the literature [11, 10, 6, 23], including those based on mutual information and linear
algebraic notions of distance, and approximation algorithms have been proposed. This is similar in
spirit to the rich and beautiful literature on clustering problems with various objectives (e.g. k-center,
k-median, k-means). Similar to clustering, many of the variants of diversity maximization admit
30th Conference on Neural Information Processing Systems (NIPS 2016), Barcelona, Spain.
Figure 1: Preference for far clusters in sum-sum(·) maximization
constant factor approximation algorithms. Most of the known algorithms for diversity maximization
are based on a natural greedy approach, or on local search.
Our goal in this work is to develop novel linear programming formulations for diversity maximization and provide new approximation guarantees. Convex relaxation approaches are typically powerful in that they can incorporate additional constraints and additional objective functions, as we will
illustrate. This is important in some applications, and indeed, diversity maximization has been studied under additional knapsack [3] and matroid [2] constraints. In applications such as web search,
it is important to optimize diversity, along with other objectives, such as total relevance or coverage
(see [4]). Another contribution of this work is to explore approximation lower bounds for diversity
maximization. Given the simplicity of the best known algorithms for some objectives (e.g., greedy
addition, single-swap local search), it is natural to ask if better algorithms are possible. Rather
surprisingly, we show that the answer is no, for the most common objectives.
Objective functions. The many variants of diversity maximization differ in their choice of the
objective function, i.e., how they define diversity of a set S of points. Our focus in this paper will be
distance based objectives, which can be defined over arbitrary metric spaces, via pairwise distances
between the chosen points. Let d(u, v) be the distance between points u and v, and for a set of points
T, let d(u, T) = min_{v∈T} d(u, v). The three most common objectives are:
1. Min-min diversity, defined by min-min(S) = min_{u∈S} d(u, S \ u).
2. Sum-min diversity, defined by sum-min(S) = \sum_{u∈S} d(u, S \ u).
3. Sum-sum diversity, defined by sum-sum(S) = \sum_{u∈S} \sum_{v∈S} d(u, v).
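For concreteness, here is a small Python sketch (ours, not from the paper) that computes the three objectives from a pairwise distance matrix; the interface is an assumption for illustration.

import numpy as np

def diversities(D, S):
    # D: n x n symmetric distance matrix; S: indices of the chosen subset.
    # Returns (min-min, sum-min, sum-sum) for S.
    sub = D[np.ix_(list(S), list(S))].astype(float)
    np.fill_diagonal(sub, np.inf)        # exclude d(u, u) when taking nearest neighbors
    nearest = sub.min(axis=1)            # d(u, S \ u) for each u in S
    min_min, sum_min = nearest.min(), nearest.sum()
    np.fill_diagonal(sub, 0.0)
    sum_sum = sub.sum()                  # sums d(u, v) over all ordered pairs in S
    return min_min, sum_min, sum_sum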
All three objectives have been used in applications [16]. Of these min-min and sum-sum are also
known to admit constant factor approximation algorithms. In fact, a natural greedy algorithm gives
a factor 1/2 approximation for min-min, while local search gives a constant factor approximation
for sum-sum, even with matroid constraints [6, 2, 4]. However, for the sum-min objective, the best
known algorithm had an approximation factor of O(1/ log n) [6] and no inapproximability results
were known. Combinatorial methods such as greedy and local search fail (see Lemma 1), and
achieving a constant factor approximation has remained a challenge. Compared to the other objectives, the sets that maximize the sum-min objective have properties that are desirable in practice,
as observed in [16], and demonstrated in our experiments. We will now outline some theoretical
reasons.
Drawbacks of the min-min and sum-sum objectives. The main problem with min-min stems
from the fact that it solely depends on the closest pair of chosen points, and it does not capture
the distance distribution between the chosen points well. Another concern is that it is highly nonmonotone in the size of |S|: in applications such as search, it is paradoxical for the diversity to
take a sharp drop once we add one extra element to the set of search results. The sum-sum objective
is much more robust, and is hence much more popular in applications. However, as also noted
in [16], it tends to promote picking too many ?corner? points. To illustrate, suppose we have a set
of points that fall into k clusters (which is common in candidate search results). Suppose the points
are distributed as a mixture of k equally spaced Gaussians on a line (see Figure 1). The intuitively
desired solution is to pick one point from each of the clusters. However the optimizer for sum-sum
picks all the points from the farthest two clusters (shown shaded in Figure 1).
The sum-min objective inherits the good properties of both: it is robust to a small number of
additions/removals, and it tries to ensure that each point is far from the others. However, it is
trickier to optimize, as we mentioned earlier. In fact, in the supplement, Section E, we show that:
Lemma 1. The natural Greedy and Local-Search algorithms for sum-min diversity have an approximation ratio of O(1/√k).
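For reference, the Greedy baseline in Lemma 1 can be sketched as follows (an illustrative Python version of the natural heuristic; the seeding and tie-breaking are our assumptions):

import numpy as np

def greedy_sum_min(D, k):
    # Repeatedly add the point that maximizes the resulting sum-min value.
    # Simple and fast, but by Lemma 1 it can lose a Theta(sqrt(k)) factor.
    D = np.asarray(D, dtype=float)
    n = len(D)
    S = [int(np.argmax(D.sum(axis=1)))]               # seed with a far-out point
    while len(S) < k:
        best, best_val = -1, -1.0
        for u in range(n):
            if u in S:
                continue
            sub = D[np.ix_(S + [u], S + [u])].copy()
            np.fill_diagonal(sub, np.inf)
            val = sub.min(axis=1).sum()               # sum-min of S + {u}
            if val > best_val:
                best, best_val = u, val
        S.append(best)
    return S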
Our contributions. With these motivations, we study the problem of maximizing sum-min diversity subject to a cardinality constraint: max_{|S|≤k} sum-min(S). Our main algorithmic results
are:
• We give a factor 1/8 approximation for sum-min diversity maximization with cardinality
constraint (the first constant factor approximation). Indeed, when k is a large enough constant, we give a (roughly) 1/(2e)-approximation. This is presented in Section 2 to illustrate
our ideas (Theorem 1). The algorithm can also incorporate arbitrary concave functions of
distance, as well as explicit constraints to avoid duplicates (end of Section 2).
• We show that the 1/8 approximation holds when we replace cardinality constraints with arbitrary matroid constraints. Such constraints arise in applications such as product search [3]
or news aggregators [2] where it is desirable to report items from different brands or different news agencies. This can be modeled as a partition matroid.
• Our formulation can be used to maximize the sum-min diversity, along with total relevance
or coverage objectives (Theorem 3). This is motivated by applications in recommender
systems in which we also want the set of results we output to cover a large range of topics [4, 2], or have a high total relevance to a query.
Next, we show that for both the sum-sum and the sum-min variants of diversity maximization, obtaining an approximation factor better than 1/2 is hard, under the planted clique assumption (Theorem 5). (We observe that such a result for min-min is easy, by a reduction from independent set.)
This implies that the simple local search algorithms developed for the sum-sum diversity maximization problem [6, 10, 11] are the best possible under the planted clique assumption.
Finally, we study the empirical performance of our algorithm on several standard datasets. Our
goal here is two-fold: first, we make an experimental case for the sum-min objective, by comparing
the quality of the solutions output by our algorithm (which aims to maximize sum-min) with other
popular algorithms (that maximize sum-sum). This is measured by how well the solution covers
various clusters in the data, as well as by measuring quality in a feature selection task. Second, we
study the approximation quality of the algorithm on real datasets, and observe that it performs much
better than the theoretical guarantee (factor 1/8).
1.1 Notation and Preliminaries
Throughout, (V, d) will denote the metric space we are working with, and we will write n = |V |.
The number of points we need to output will, unless specified otherwise, be denoted by k.
Approximation factor. We say that an algorithm provides an α factor approximation if, on every
instance, it outputs a solution whose objective value is at least α · opt, where opt is the optimum
value of the objective. (Since we wish to maximize our diversity objectives, α will be ≤ 1, and
ratios closer to 1 are better.)
Monotonicity of sum-min. We observe that our main objective, sum-min(·), is not monotone. I.e.,
sum-min(S ∪ u) could be ≤ sum-min(S) (for instance, if u is very close to one of the elements
of S). This means that it could be better for an algorithm to output k′ < k elements if the goal
is to maximize sum-min(·). However, this non-monotonicity is not too serious a problem, as the
following lemma shows (proof in the supplement, Section A.1).
Lemma 2. Let (V, d) be a metric space, and n = |V|. Suppose 1 < k < n/3 is the target number
of elements. Let S* be any subset of V of size ≤ k. Then we can efficiently find an S ⊆ V of size
= k, such that sum-min(S) ≥ 1/4 · sum-min(S*).
Since our aim is to design a constant factor approximation algorithm, in what follows, we will allow
our algorithms to output ≤ k elements (we can then use the lemma above to output precisely k).
Matroid constraints. Let D be a ground set of elements (which in our case, it will be V or its
subset). A matroid M is defined by I, a family of subsets of D, called the independent sets of the
matroid. I is required to have the properties of being subset-closed and having the basis exchange
property (see Schrijver [22] for details). Some well-studied matroids which we consider are: (a) the
uniform matroid of rank k, for which we have I := {X ⊆ D : |X| ≤ k}, (b) partition matroids,
which are the direct sum of uniform matroids.
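As a quick illustration (ours), the independence test for a partition matroid, the case used in the product-search and news-aggregator examples above:

def is_independent(X, group_of, capacity):
    # Partition matroid: X is independent iff it contains at most
    # capacity[g] elements from each group g (e.g., results per brand).
    counts = {}
    for e in X:
        g = group_of[e]
        counts[g] = counts.get(g, 0) + 1
        if counts[g] > capacity[g]:
            return False
    return True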
In matroid constrained diversity maximization, we are given a matroid M as above, and the goal is
to output an element of I that maximizes diversity. Note that if M is the uniform matroid, this is
equivalent to a cardinality constraint. The matroid polytope P(M), defined to be the convex hull
of the indicator vectors of sets in I, plays a key role in optimization under matroid constraints. For
most matroids of practical interest, it turns out optimization over P (M) can be done in polynomial
time.
2 Basic Linear Programming Formulation
We will now illustrate the main ideas behind our LP framework. We do so by proving a slightly
simpler form of our result, where we assume that k is not too small. Specifically, we show that:
Theorem 1. Let (V, d) be a metric space on n points, and let ε, k be parameters that satisfy ε ∈
(0, 1) and k > 8 log(1/ε)/ε². There is a randomized polynomial time algorithm that outputs a set
S ⊆ V of size ≤ k with E[sum-min(S)] ≥ (1−2ε)/(2e) · opt, where opt is the largest possible sum-min()
value for a subset of V of size ≤ k.
The main challenge in formulating an LP for the sum-min objective is to capture the quantity d(u, S \ u). The key trick is to introduce new variables to do so. To make things formal, for i ∈ V, we denote
by R_i = {d(i, j) : j ≠ i} the set of candidate distances from i to its closest point in S. Next, let
B(i, r) denote the "open" ball of radius r centered at i, i.e., B(i, r) = {j ∈ V : d(i, j) < r}; and
let B′(i, r) = B(i, r/2) denote the ball of half the radius.
The LP we consider is as follows: we have a variable x_{ir} for each i ∈ V and r ∈ R_i which is
supposed to be 1 iff i ∈ S and r = min_{j∈S\{i}} d(i, j). Thus for every i, at most one x_{ir} is 1 and the
rest are 0. Hence \sum_{i, r∈R_i} x_{ir} ≤ k for the intended solution. The other set of constraints we add is
the following: for each u ∈ V,

\sum_{i∈V, r∈R_i : u∈B′(i,r)} x_{ir} ≤ 1.    (figure in Section A.3 of supplement)    (1)
These constraints are the crux of our LP formulation. They capture the fact that if we take any
solution S ⊆ V, the balls B(s, r/2), where s ∈ S and r = d(s, S \ {s}), are disjoint. This is
because if u ∈ B′(i_1, r_1) ∩ B′(i_2, r_2), then assuming r_1 ≥ r_2 (w.l.o.g.), the triangle inequality implies
that d(i_1, i_2) < r_1 (the strict inequality is because we defined the balls to be "open"); thus, in an
integral solution, we will set at most one of x_{i_1 r_1} and x_{i_2 r_2} to 1. The full LP can now be written as
follows:
maximize   \sum_i \sum_{r∈R_i} x_{ir} · r
subject to
    \sum_{i∈V, r∈R_i} x_{ir} ≤ k,
    \sum_{i∈V, r∈R_i : u∈B′(i,r)} x_{ir} ≤ 1   for all u ∈ V,
    0 ≤ x_{ir} ≤ 1.
The algorithm then proceeds by solving this LP, and rounding via the procedure defined below.
Note that after step 2, we may have pairs with the same first coordinate, since we round them
independently. But after step 3, this will not happen, as all but one of them will have been removed.
procedure round(x)    // LP solution (x)
1: Initialize S = ∅.
2: Add (i, r) to S with probability (1 − ε)(1 − e^{−x_{ir}}) (independent of the other point-radius pairs).
3: If ∃ (i, r) ≠ (j, r′) ∈ S such that r ≤ r′ and i ∈ B′(j, r′), remove (i, r) from S.
4: If |S| > k, abort (i.e., return ∅, which has value 0); else return S, the set of first coordinates of S.
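A direct transcription of round into Python might look like the following sketch (ours; the LP solution is assumed to be given as a dict keyed by (i, r) pairs, and ties in step 3 may drop both members of a pair):

import math, random

def lp_round(x, D, k, eps):
    # x: dict mapping (i, r) -> LP value x_ir; D: distance matrix.
    picked = [(i, r) for (i, r), v in x.items()
              if random.random() < (1 - eps) * (1 - math.exp(-v))]   # step 2
    survivors = []
    for (i, r) in picked:                                            # step 3
        clashes = any((i, r) != (j, rp) and r <= rp and D[i][j] < rp / 2.0
                      for (j, rp) in picked)
        if not clashes:
            survivors.append((i, r))
    if len(survivors) > k:                                           # step 4: abort
        return []
    return [i for (i, _) in survivors]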
Running time. The LP as described contains n² variables, n for each vertex. This can easily be
reduced to O(log n) per vertex, by only considering r in multiples of (1 + δ), for some fixed δ > 0.
Further, we note that the LP is a packing LP. Thus it can be solved in time that is nearly linear in the
size (and can be solved in parallel) [19].
Analysis. Let us now show that round returns a solution with large expected value for the
objective (note that due to the last step, it always returns a feasible solution, i.e., size ≤ k). The idea
is to write the expected diversity as a sum of appropriately defined random variables, and then use
the linearity of expectation. For a (vertex, radius) pair (i, r), define ξ_{ir} to be an indicator random
variable that is 1 iff (a) the pair (i, r) is picked in step 2, (b) it is not removed in step 3, and (c)
|S| ≤ k after step 3. Then we have the following.
Lemma 3. Let S be the solution output by the algorithm, and define ξ_{ir} as above. Then we have
sum-min(S) ≥ \sum_{i,r} (r/2) · ξ_{ir}.
Proof. If the set S after step 3 is of size > k, each ξ_{ir} = 0, and so there is nothing to prove.
Otherwise, consider the set S at the end of step 3 and consider two pairs (i, r), (j, r′) ∈ S. The fact
that both of them survived step 3 implies that d(i, j) ≥ max(r, r′)/2. Thus d(i, j) ≥ r/2 for any
j ≠ i in the output, which implies that the contribution of i to the sum-min objective is ≥ r/2. This
completes the proof.
Now, we will fix one pair (i, r) and show a lower bound on Pr[ξ_{ir} = 1].
Lemma 4. Consider the execution of the algorithm, and consider some pair (i, r). Define ξ_{ir} as
above. We have Pr[ξ_{ir} = 1] ≥ (1 − 2ε) x_{ir}/e.
Proof. Let T be the set of all (point, radius) pairs (j, r′) such that (i, r) ≠ (j, r′), i ∈ B′(j, r′), and
r′ ≥ r. Now, the condition (b) in the definition of ξ_{ir} is equivalent to the condition that none of the
pairs in T are picked in step 2. Let us denote by ξ^{(a)} (resp., ξ^{(b)}, ξ^{(c)}) the indicator variable for the
condition (a) (resp. (b), (c)) in the definition of ξ_{ir}. We need to lower bound Pr[ξ^{(a)} ∧ ξ^{(b)} ∧ ξ^{(c)}].
To this end, note that

Pr[ξ^{(a)} ∧ ξ^{(b)} ∧ ξ^{(c)}] = Pr[ξ^{(a)} ∧ ξ^{(b)}] − Pr[ξ^{(a)} ∧ ξ^{(b)} ∧ ¬ξ^{(c)}]
                        ≥ Pr[ξ^{(a)} ∧ ξ^{(b)}] − Pr[ξ^{(a)} ∧ ¬ξ^{(c)}].    (2)
Here ¬ξ^{(c)} denotes the complement of ξ^{(c)}, i.e., the event |S| > k at the end of step 3. Now, since
the rounding selects pairs independently, we can lower bound the first term as

Pr[ξ^{(a)} ∧ ξ^{(b)}] ≥ (1 − ε)(1 − e^{−x_{ir}}) \prod_{(j,r′)∈T} (1 − (1 − ε)(1 − e^{−x_{jr′}}))
                  ≥ (1 − ε)(1 − e^{−x_{ir}}) \prod_{(j,r′)∈T} e^{−x_{jr′}}    (3)
Now, we can upper bound \sum_{(j,r′)∈T} x_{jr′}, by noting that for all such pairs, B′(j, r′) contains i, and
thus the LP constraint for i implies that \sum_{(j,r′)∈T} x_{jr′} ≤ 1 − x_{ir}. Plugging this into (3), we get

Pr[ξ^{(a)} ∧ ξ^{(b)}] ≥ (1 − ε)(1 − e^{−x_{ir}}) e^{−(1−x_{ir})} = (1 − ε)(e^{x_{ir}} − 1)/e ≥ (1 − ε) x_{ir}/e.

We then need to upper bound the second term of (2). This is done using a Chernoff bound, which
then implies the lemma. (See the Supplement, Section A.2 for details.)
Proof of Theorem 1. The proof follows from Lemmas 3 and 4, together with linearity of expectation.
For details, see Section A.3 of the supplementary material.
Direct Extensions. We mention two useful extensions that follow from our argument.
(1) We can explicitly prevent the LP from picking points that are too close to each other (near
duplicates). Suppose we are only looking for solutions in which every pair of points is at least a
distance τ apart. Then, we can modify the set of "candidate" distances R_i for each vertex to only
include those ≥ τ. This way, in the final solution, all the chosen points are at least τ/2 apart.
(2) Our approximation guarantee also holds if the objective has any monotone concave function g()
of d(u, S \ u). In the LP, we could maximize \sum_i \sum_{r∈R_i} x_{ir} · g(r), and the monotone concavity
(which implies g(r/2) ≥ g(r)/2) ensures the same approximation ratio. In some settings, having a
cap on a vertex's contribution to the objective is useful (e.g., bounding the effect of outliers).
3 General Matroid Constraints
Let us now state our general result. It removes the restriction on k, and has arbitrary matroid constraints, as opposed to cardinality constraints in Section 2.
Theorem 2. Let (V, d) be a metric space on n points, and let M = (V, I) be a matroid on V. Then
there is an efficient randomized algorithm¹ to find an S ∈ I whose expected sum-min(S) value is at
least opt/8, where opt = max_{I∈I} sum-min(I).
The algorithm proceeds by solving an LP relaxation as before. The key differences in the formulation are:
(1) we introduce new opening variables y_i := \sum_{r∈R_i} x_{ir} for each i ∈ V, and (2) the
constraint \sum_i y_i ≤ k (which we had written in terms of the x variables) is now replaced with a general matroid constraint, which states that y ∈ P(M). See Section B (of the supplementary material)
for the full LP.
This LP is now rounded using a different procedure, which we call generalized-round. Here, instead
of independent rounding, we employ the randomized swap rounding algorithm (or the closely related
pipage rounding) of [7], followed by a randomized rounding step.
procedure generalized-round(y, x)    // LP solution (y, x)
1: Initialize S = ∅.
2: Apply randomized swap rounding to the vector y/2 to obtain Y ∈ {0, 1}^V ∩ P(M).
3: For each i with Y_i = 1, add i to S and sample a radius r_i according to the probability distribution
that selects r ∈ R_i with probability x_{ir}/y_i.
4: If i ∈ B′(j, r_j) with i ≠ j ∈ S and r_j ≥ r_i, remove i from S.
5: Return S.
Note that the rounding outputs S, along with an r_i value for each i ∈ S. The idea behind the analysis
is that this rounding has the same properties as randomized rounding, while ensuring that S is an
independent set of M. The details, and the proof of Theorem 2, are deferred to the supplementary
material (Section B).
4 Additional Objectives and Hardness
The LP framework allows us to incorporate "secondary objectives". As an example, consider the
problem of selecting search results, in which every candidate page has a relevance to the query,
along with the metric between pages. Here, we are interested in selecting a subset with a high total
relevance, in addition to a large value of sum-min(). A generalization of relevance is coverage.
Suppose every page u comes with a set Cu of topics it covers. Now consider the problem of picking
a set S of pages so as to simultaneously maximize sum-min() and the total coverage, i.e., the size
of the union ∪_{u∈S} C_u, subject to cardinality constraints. (Coverage generalizes relevance, because
if the sets Cu are all disjoint, then |Cu | acts as the relevance of u.)
Because we have a simple formulation and rounding procedure, we can easily incorporate a coverage
(and therefore relevance) objective into our LP, and obtain simultaneous guarantees. We prove the
following: (A discussion of the theorem and its proof are deferred to Section C.)
Theorem 3. Let (V, d) be a metric space and let {C_u : u ∈ V} be a collection of subsets of
a universe [m]. Suppose there exists a set S* ⊆ V of size ≤ k with sum-min(S*) = opt, and
|∪_{u∈S*} C_u| = C. Then there is an efficient randomized algorithm that outputs a set S satisfying:
(1) E[|S|] ≤ k, (2) E[sum-min(S)] ≥ opt/8, and (3) E[|∪_{u∈S} C_u|] ≥ C/16.
4.1 Hardness Beyond Factor 1/2
For diversity maximization under both the sum-sum and the sum-min objectives, we show that obtaining approximation ratios better than 2 is unlikely, by a reduction from the so-called planted
clique problem. Such a reduction for sum-sum was independently obtained by Borodin et al. [4].
For completeness, we provide the details and proof in the supplementary material (Section D).
¹ Assuming optimization over P(M) can be done efficiently, which is true for all standard matroids.
5 Experiments
Goals and design. The goal of our experiments is to evaluate the sum-min objective as well as
the approximation quality of our algorithm on real datasets. For the ?rst of the two, we consider
the k-element subsets obtained by maximizing the sum-min objective (using a slight variant of our
algorithm), and compare their quality (in terms of being representative of the data) with subsets obtained by maximizing the sum-sum objective, which is the most commonly used diversity objective.
Since measuring the quality as above is not clearly defined, we come up with two measures, using
datasets that have a known clustering:
(1) First, we see how well the different clusters are represented in the chosen subset. This is important in web search applications, and we do this in two ways: (a) by measuring the number of distinct
clusters present, and (b) by observing the "non-uniformity" in the number of nodes picked from the
different clusters, measured as a deviation from the mean.
(2) Second, we consider feature-selection. Here, we consider data in which each object has several
features, and then we pick a subset of the features (treating each feature as a vector of size equal
to the number of data points). Then, we restrict data to just the chosen features, and see how well
3-NN clustering in the obtained space (which is much faster to perform than in the original space,
due to the reduced number of features) compares with ground-truth clustering.
Let us go into the details of (1) above. We used two datasets with ground-truth clusterings. The
first is COIL100, which contains images of 100 different objects [17]. It includes 72 images per
object. We convert them into 32 ? 32 grayscale images and consider 6 pictures per object. We used
Euclidean distance as the metric. The second dataset is CDK2 ? a drug discovery dataset publicly
available in BindingDB.org [15, 1]. It contains 2253 compounds in 151 different clusters. Tanimoto
distance, which is widely used in the drug discovery literature (and is similar to Jaccard distance),
was used as the metric. Figure 2 (top) shows the number of distinct clusters picked by algorithms
for the two objectives, and (bottom) shows the non-uniformity in the #(elements) picked from the
different clusters (mean standard deviation). We note that throughout this section, augmented LP is the
algorithm that first does our LP rounding, and then adds nodes in a greedy manner so as to produce
a subset of size precisely k (since randomized rounding could produce smaller sets).
(a) COIL100 coverage
(b) CDK2 coverage
(c) COIL100 non-uniformity
(d) CDK2 non-uniformity
Figure 2: Sum-min vs sum-sum objectives: how chosen subsets represent clusters
Now consider (2) above: feature selection. We used two handwritten text datasets. Multiple Features is a dataset of handwritten digits (649 features, 2000 instances [14]). USPS is a dataset of
handwritten text (256 features, 9298 instances [12, 5]). We used the Euclidean distance as the metric
(we could use more sophisticated features to compute distance, but even the simplistic one produces
good results). Figure 3 shows the performance of the features selected by various algorithms.
(a) Multiple Features dataset
(b) USPS dataset
Figure 3: Comparing outputs of feature selection via 3-NN classification with 10-fold cross validation.
Next, we evaluate the practical performance of our LP algorithm, in terms of the proximity to the
optimum objective value. Since we do not know the optimum, we compare it with the minimum of
two upper bounds: the first is simply the value of the LP solution. The second is obtained as follows.
For every $i$, let $t_i^j$ denote the $j$th largest distance from $i$ to other points in the dataset. The sum of
the $k$ largest elements of $\{t_i^{k-1} \mid i = 1, \ldots, n\}$ is clearly an upper bound on the sum-min objective
(each chosen point's nearest neighbor among the other $k-1$ chosen points is at distance at most its
$(k-1)$th largest distance overall), and sometimes it could be better than the LP optimum. Figure 4 shows
the percentage of the minimum of the upper bounds that the augmented-LP algorithm achieves for two
datasets [14, 18, 12, 8]. Note that it is significantly better than the theoretical guarantee 1/8. In fact,
by adding the so-called clique constraints to the LP, we can obtain even better bounds on the
approximation ratio. However, this will result in a quadratic number of constraints, making the LP
approach slow. Figure 4 also
depicts the value of the simple LP algorithm (without augmenting to select k points).
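A sketch of the combinatorial side of this upper bound, given a distance matrix D; the overall bound reported in Figure 4 would be the minimum of this quantity and the LP value.

```python
import numpy as np

def sum_min_upper_bound(D, k):
    """Sum of the k largest values of t_i^{k-1}, where t_i^{k-1} is the
    (k-1)-th largest distance from point i to the rest (assumes k >= 2)."""
    t = np.sort(D, axis=1)[:, -(k - 1)]     # (k-1)-th largest distance per row
    return np.sort(t)[-k:].sum()
```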
Finally, we point out that for many of the datasets we consider, there is no significant difference
between the LP-based algorithm and the Local Search (and sometimes even the Greedy) heuristic in
terms of the sum-min objective value. However, as we noted, the heuristics do not have worst-case
guarantees. A comparison is shown in Figure 4 (c).
Figure 4: (a) and (b) show the approximation factor of the LP and augmented LP algorithms on the Madelon and USPS datasets, respectively; (c) compares Augmented LP with Greedy and LocalSearch in terms of sum-min objective value on the COIL100 dataset.
Conclusions. We have presented an approximation algorithm for diversity maximization, under
the sum-min objective, by developing a new linear programming (LP) framework for the problem.
Sum-min diversity turns out to be very effective at picking representatives from clustered data, a
fact that we have demonstrated experimentally. Simple algorithms such as Greedy and Local Search
could perform quite badly for sum-min diversity, which led us to the design of the LP approach.
The approximation factor turns out to be much better in practice (compared to 1/8, which is the
theoretical bound). Our LP approach is also quite general, and can easily incorporate additional
objectives (such as relevance), which often arise in applications.
References
[1] The binding database. http://www.bindingdb.org/. Accessed: 2016-05-01.
[2] Z. Abbassi, V. S. Mirrokni, and M. Thakur. Diversity maximization under matroid constraints. In KDD, pages 32-40, 2013.
[3] S. Bhattacharya, S. Gollapudi, and K. Munagala. Consideration set generation in commerce search. In S. Srinivasan, K. Ramamritham, A. Kumar, M. P. Ravindra, E. Bertino, and R. Kumar, editors, WWW, pages 317-326. ACM, 2011.
[4] A. Borodin, H. C. Lee, and Y. Ye. Max-sum diversification, monotone submodular functions and dynamic updates. In M. Benedikt, M. Krötzsch, and M. Lenzerini, editors, PODS, pages 155-166. ACM, 2012.
[5] D. Cai, X. He, J. Han, and T. S. Huang. Graph regularized nonnegative matrix factorization for data representation. Pattern Analysis and Machine Intelligence, IEEE Transactions on, 33(8):1548-1560, 2011.
[6] B. Chandra and M. M. Halldórsson. Approximation algorithms for dispersion problems. Journal of Algorithms, 38(2):438-465, 2001.
[7] C. Chekuri, J. Vondrak, and R. Zenklusen. Dependent randomized rounding via exchange properties of combinatorial structures. In Foundations of Computer Science (FOCS), 2010 51st Annual IEEE Symposium on, pages 575-584, Oct 2010.
[8] P. Duygulu, K. Barnard, J. F. G. de Freitas, and D. A. Forsyth. Object recognition as machine translation: Learning a lexicon for a fixed image vocabulary. In Computer Vision - ECCV 2002, 7th European Conference on Computer Vision, Copenhagen, Denmark, May 28-31, 2002, Proceedings, Part IV, pages 97-112, 2002.
[9] V. Feldman, E. Grigorescu, L. Reyzin, S. Vempala, and Y. Xiao. Statistical algorithms and a lower bound for detecting planted cliques. In Proceedings of the Forty-fifth Annual ACM Symposium on Theory of Computing, STOC '13, pages 655-664, New York, NY, USA, 2013. ACM.
[10] S. Gollapudi and A. Sharma. An axiomatic approach for result diversification. In J. Quemada, G. León, Y. S. Maarek, and W. Nejdl, editors, WWW, pages 381-390. ACM, 2009.
[11] R. Hassin, S. Rubinstein, and A. Tamir. Approximation algorithms for maximum dispersion. Oper. Res. Lett., 21(3):133-137, 1997.
[12] J. J. Hull. A database for handwritten text recognition research. IEEE Trans. Pattern Anal. Mach. Intell., 16(5):550-554, 1994.
[13] R. M. Karp. Probabilistic analysis of some combinatorial search problems. Algorithms and Complexity: New Directions and Recent Results, pages 1-19, 1976.
[14] M. Lichman. UCI machine learning repository, 2013.
[15] T. Liu, Y. Lin, X. Wen, R. N. Jorissen, and M. K. Gilson. BindingDB: a web-accessible database of experimentally determined protein-ligand binding affinities. Nucleic Acids Research, 35(suppl 1):D198-D201, 2007.
[16] T. Meinl, C. Ostermann, and M. R. Berthold. Maximum-score diversity selection for early drug discovery. Journal of Chemical Information and Modeling, 51(2):237-247, 2011.
[17] S. Nayar, S. Nene, and H. Murase. Columbia object image library (COIL-100). Department of Comp. Science, Columbia University, Tech. Rep. CUCS-006-96, 1996.
[18] H. Peng, F. Long, and C. Ding. Feature selection based on mutual information criteria of max-dependency, max-relevance, and min-redundancy. Pattern Analysis and Machine Intelligence, IEEE Transactions on, 27(8):1226-1238, 2005.
[19] S. A. Plotkin, D. B. Shmoys, and E. Tardos. Fast approximation algorithms for fractional packing and covering problems. In Proceedings of the 32nd Annual Symposium on Foundations of Computer Science, SFCS '91, pages 495-504, Washington, DC, USA, 1991. IEEE Computer Society.
[20] L. Qin, J. X. Yu, and L. Chang. Diversifying top-k results. Proceedings of the VLDB Endowment, 5(11):1124-1135, 2012.
[21] F. Radlinski and S. T. Dumais. Improving personalized web search using result diversification. In SIGIR, 2006.
[22] A. Schrijver. Combinatorial Optimization. Springer-Verlag, Berlin, 2003.
[23] N. Vasconcelos. Feature selection by maximum marginal diversity: optimality and implications for visual recognition. In Computer Vision and Pattern Recognition, 2003. Proceedings. 2003 IEEE Computer Society Conference on, volume 1, pages I-762-I-769, vol. 1, June 2003.
[24] M. R. Vieira, H. L. Razente, M. C. Barioni, M. Hadjieleftheriou, D. Srivastava, C. Traina, and V. J. Tsotras. On query result diversification. In Data Engineering (ICDE), 2011 IEEE 27th International Conference on, pages 1163-1174. IEEE, 2011.
Deep Exploration via Bootstrapped DQN
Ian Osband¹,², Charles Blundell², Alexander Pritzel², Benjamin Van Roy¹
¹Stanford University, ²Google DeepMind
{iosband, cblundell, apritzel}@google.com, bvr@stanford.edu
Abstract
Efficient exploration remains a major challenge for reinforcement learning
(RL). Common dithering strategies for exploration, such as ε-greedy, do
not carry out temporally-extended (or deep) exploration; this can lead
to exponentially larger data requirements. However, most algorithms for
statistically efficient RL are not computationally tractable in complex environments. Randomized value functions offer a promising approach to
efficient exploration with generalization, but existing algorithms are not
compatible with nonlinearly parameterized value functions. As a first step
towards addressing such contexts we develop bootstrapped DQN. We demonstrate that bootstrapped DQN can combine deep exploration with deep
neural networks for exponentially faster learning than any dithering strategy. In the Arcade Learning Environment bootstrapped DQN substantially
improves learning speed and cumulative performance across most games.
1 Introduction
We study the reinforcement learning (RL) problem where an agent interacts with an unknown
environment. The agent takes a sequence of actions in order to maximize cumulative rewards.
Unlike standard planning problems, an RL agent does not begin with perfect knowledge
of the environment, but learns through experience. This leads to a fundamental trade-off
of exploration versus exploitation; the agent may improve its future rewards by exploring
poorly understood states and actions, but this may require sacrificing immediate rewards. To
learn efficiently an agent should explore only when there are valuable learning opportunities.
Further, since any action may have long term consequences, the agent should reason about
the informational value of possible observation sequences. Without this sort of temporally
extended (deep) exploration, learning times can worsen by an exponential factor.
The theoretical RL literature offers a variety of provably-efficient approaches to deep exploration [9]. However, most of these are designed for Markov decision processes (MDPs) with
small finite state spaces, while others require solving computationally intractable planning
tasks [8]. These algorithms are not practical in complex environments where an agent must
generalize to operate effectively. For this reason, large-scale applications of RL have relied
upon statistically inefficient strategies for exploration [12] or even no exploration at all [23].
We review related literature in more detail in Section 4.
Common dithering strategies, such as ε-greedy, approximate the value of an action by
a single number. Most of the time they pick the action with the highest estimate, but
sometimes they choose another action at random. In this paper, we consider an alternative
approach to efficient exploration inspired by Thompson sampling. These algorithms have
some notion of uncertainty and instead maintain a distribution over possible values. They
explore by randomly selecting a policy according to the probability that it is the optimal policy.
Recent work has shown that randomized value functions can implement something similar
to Thompson sampling without the need for an intractable exact posterior update. However,
this work is restricted to linearly-parameterized value functions [16]. We present a natural
extension of this approach that enables use of complex non-linear generalization methods
such as deep neural networks. We show that the bootstrap with random initialization can
produce reasonable uncertainty estimates for neural networks at low computational cost.
Bootstrapped DQN leverages these uncertainty estimates for efficient (and deep) exploration.
We demonstrate that these benefits can extend to large scale problems that are not designed
to highlight deep exploration. Bootstrapped DQN substantially reduces learning times and
improves performance across most games. This algorithm is computationally efficient and
parallelizable; on a single machine our implementation runs roughly 20% slower than DQN.
2 Uncertainty for neural networks
Deep neural networks (DNN) represent the state of the art in many supervised and reinforcement learning domains [12]. We want an exploration strategy that is statistically and
computationally efficient, together with a DNN representation of the value function. To
explore efficiently, the first step is to quantify uncertainty in value estimates so that the agent
can judge potential benefits of exploratory actions. The neural network literature presents a
sizable body of work on uncertainty quantification founded on parametric Bayesian inference
[3, 7]. We actually found the simple non-parametric bootstrap with random initialization [5]
more effective in our experiments, but the main ideas of this paper would apply with any
other approach to uncertainty in DNNs.
The bootstrap principle is to approximate a population distribution by a sample distribution
[6]. In its most common form, the bootstrap takes as input a data set $D$ and an estimator $\psi$.
To generate a sample from the bootstrapped distribution, a data set $\tilde{D}$ of cardinality equal
to that of $D$ is sampled uniformly with replacement from $D$. The bootstrap sample estimate
is then taken to be $\psi(\tilde{D})$. The bootstrap is widely hailed as a great advance of 20th-century
applied statistics and even comes with theoretical guarantees [2]. In Figure 1a we present
an efficient and scalable method for generating bootstrap samples from a large and deep
neural network. The network consists of a shared architecture with $K$ bootstrapped "heads"
branching off independently. Each head is trained only on its bootstrapped sub-sample
of the data and represents a single bootstrap sample $\psi(\tilde{D})$. The shared network learns a
joint feature representation across all the data, which can provide significant computational
advantages at the cost of lower diversity between heads. This type of bootstrap can be
trained efficiently in a single forward/backward pass; it can be thought of as a data-dependent
dropout, where the dropout mask for each head is fixed for each data point [19].
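A minimal sketch of this shared-trunk, K-head design and its data-dependent dropout, written here in PyTorch for a toy regression setting; the class and function names are ours, and the trunk is a stand-in for whatever shared architecture is used.

```python
import torch
import torch.nn as nn

class BootstrappedHeads(nn.Module):
    """Shared trunk with K bootstrapped heads branching off (cf. Figure 1a)."""
    def __init__(self, in_dim, hidden=50, out_dim=1, K=10):
        super().__init__()
        self.trunk = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU())
        self.heads = nn.ModuleList([nn.Linear(hidden, out_dim) for _ in range(K)])

    def forward(self, x):                                    # x: (B, in_dim)
        h = self.trunk(x)
        return torch.stack([head(h) for head in self.heads], dim=1)  # (B, K, out_dim)

def masked_bootstrap_loss(preds, targets, masks):
    """Each head only sees its own bootstrap sub-sample: masks (B, K) zeroes
    out the squared errors of examples a head was not assigned."""
    err = (preds.squeeze(-1) - targets.unsqueeze(1)) ** 2    # (B, K)
    return (masks * err).sum() / masks.sum().clamp(min=1.0)
```

With the masks fixed per data point, a single forward/backward pass updates all heads at once, which is exactly the "data-dependent dropout" reading above.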
Figure 1: Bootstrapped neural nets can produce reasonable posterior estimates for regression. (a) Shared network architecture; (b) Gaussian process posterior; (c) Bootstrapped neural nets.
Figure 1 presents an example of uncertainty estimates from bootstrapped neural networks on
a regression task with noisy data. We trained a fully-connected 2-layer neural networks with
50 rectified linear units (ReLU) in each layer on 50 bootstrapped samples from the data.
As is standard, we initialize these networks with random parameter values, this induces an
important initial diversity in the models. We were unable to generate effective uncertainty
estimates for this problem using the dropout approach in prior literature [7]. Further details
are provided in Appendix A.
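For reference, the underlying bootstrap principle in its plainest form; the helper below is a minimal sketch with names of our choosing.

```python
import numpy as np

def bootstrap_estimates(data, estimator, B=50, seed=0):
    """Draw B datasets of size |D| uniformly with replacement from D and
    apply the estimator to each; the spread approximates its uncertainty."""
    rng = np.random.default_rng(seed)
    data = np.asarray(data)
    n = len(data)
    return np.array([estimator(data[rng.integers(0, n, size=n)]) for _ in range(B)])
```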
3 Bootstrapped DQN
For a policy $\pi$ we define the value of an action $a$ in state $s$ as $Q^{\pi}(s, a) := \mathbb{E}_{s,a,\pi}\left[\sum_{t=1}^{\infty} \gamma^t r_t\right]$,
where $\gamma \in (0, 1)$ is a discount factor that balances immediate versus future rewards $r_t$. This
expectation indicates that the initial state is $s$, the initial action is $a$, and thereafter actions
are selected by the policy $\pi$. The optimal value is $Q^*(s, a) := \max_\pi Q^{\pi}(s, a)$. To scale to
large problems, we learn a parameterized estimate of the Q-value function $Q(s, a; \theta)$ rather
than a tabular encoding. We use a neural network to estimate this value.
The Q-learning update from state $s_t$, action $a_t$, reward $r_t$ and new state $s_{t+1}$ is given by
$$\theta_{t+1} \leftarrow \theta_t + \alpha\,(y_t^Q - Q(s_t, a_t; \theta_t))\,\nabla_\theta Q(s_t, a_t; \theta_t) \qquad (1)$$
where $\alpha$ is the scalar learning rate and $y_t^Q$ is the target value $r_t + \gamma \max_a Q(s_{t+1}, a; \theta^-)$. Here $\theta^-$
are target network parameters fixed at $\theta^- = \theta_t$.
Several important modifications to the Q-learning update improve stability for DQN [12].
First the algorithm learns from sampled transitions from an experience buffer, rather than
learning fully online. Second, the algorithm uses a target network with parameters $\theta^-$ that
are copied from the learning network, $\theta^- \leftarrow \theta_t$, only every $\tau$ time steps and then kept fixed in
between updates. Double DQN [25] modifies the target $y_t^Q$ and helps further¹:
$$y_t^Q \leftarrow r_t + \gamma\, Q\big(s_{t+1},\ \arg\max_a Q(s_{t+1}, a; \theta_t);\ \theta^-\big). \qquad (2)$$
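In code, the Double DQN target of Eq. (2) is just "select with the online parameters, evaluate with the target parameters"; the sketch below assumes batched value arrays and ignores terminal-state masking.

```python
import numpy as np

def ddqn_target(r, q_next_online, q_next_target, gamma=0.99):
    """r: (B,) rewards; q_next_*: (B, num_actions) values at s_{t+1}."""
    a_star = q_next_online.argmax(axis=1)          # arg max under theta_t
    return r + gamma * q_next_target[np.arange(len(r)), a_star]  # evaluated under theta^-
```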
a
Bootstrapped DQN modifies DQN to approximate a distribution over Q-values via the
bootstrap. At the start of each episode, bootstrapped DQN samples a single Q-value function
from its approximate posterior. The agent then follows the policy which is optimal for
that sample for the duration of the episode. This is a natural adaptation of the Thompson
sampling heuristic to RL that allows for temporally extended (or deep) exploration [21, 13].
We implement this algorithm efficiently by building up $K \in \mathbb{N}$ bootstrapped estimates
of the Q-value function in parallel as in Figure 1a. Importantly, each one of these value
function heads $Q_k(s, a; \theta)$ is trained against its own target network $Q_k(s, a; \theta^-)$.
This means that each of $Q_1, \ldots, Q_K$ provides a temporally extended (and consistent) estimate
of the value uncertainty via TD estimates. In order to keep track of which data belongs to
which bootstrap head we store flags $w_1, \ldots, w_K \in \{0, 1\}$ indicating which heads are privy to
which data. We approximate a bootstrap sample by selecting $k \in \{1, \ldots, K\}$ uniformly at
random and following $Q_k$ for the duration of that episode. We present a detailed algorithm
for our implementation of bootstrapped DQN in Appendix B.
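The episode-level logic is short enough to sketch directly; this assumes a minimal environment API (reset() -> s, step(a) -> (s', r, done)) and callable heads q_heads[k](s) returning a list of action values, none of which are the paper's actual interfaces.

```python
import random

def run_bootstrapped_episode(env, q_heads, replay, p=0.5):
    k = random.randrange(len(q_heads))        # sample one head per episode ...
    s, done = env.reset(), False
    while not done:                           # ... and follow it greedily throughout
        values = q_heads[k](s)
        a = max(range(len(values)), key=values.__getitem__)
        s_next, r, done = env.step(a)
        w = [int(random.random() < p) for _ in q_heads]   # flags w_1,..,w_K ~ Ber(p)
        replay.append((s, a, r, s_next, done, w))
        s = s_next
```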
4 Related work
The observation that temporally extended exploration is necessary for efficient reinforcement
learning is not new. For any prior distribution over MDPs, the optimal exploration strategy
is available through dynamic programming in the Bayesian belief state space. However, the
exact solution is intractable even for very simple systems[8]. Many successful RL applications
focus on generalization and planning but address exploration only via inefficient exploration
[12] or even none at all [23]. However, such exploration strategies can be highly inefficient.
Many exploration strategies are guided by the principle of "optimism in the face of uncertainty"
(OFU). These algorithms add an exploration bonus to values of state-action pairs that
may lead to useful learning and select actions to maximize these adjusted values. This
approach was first proposed for finite-armed bandits [11], but the principle has been extended
successfully across bandits with generalization and tabular RL [9]. Except for particular
deterministic contexts [27], OFU methods that lead to efficient RL in complex domains
have been computationally intractable. The work of [20] aims to add an effective bonus
through a variation of DQN. The resulting algorithm relies on a large number of hand-tuned
parameters and is only suitable for application to deterministic problems. We compare our
results on Atari to theirs in Appendix D and find that bootstrapped DQN offers a significant
improvement over previous methods.
Perhaps the oldest heuristic for balancing exploration with exploitation is given by Thompson
sampling [24]. This bandit algorithm takes a single sample from the posterior at every time
step and chooses the action which is optimal for that time step. To apply the Thompson
sampling principle to RL, an agent should sample a value function from its posterior. Naive
applications of Thompson sampling to RL which resample every timestep can be extremely
¹ In this paper we use the DDQN update for all DQN variants unless explicitly stated.
inefficient. The agent must also commit to this sample for several time steps in order to
achieve deep exploration [21, 8]. The algorithm PSRL does exactly this, with state of the
art guarantees [13, 14]. However, this algorithm still requires solving a single known MDP,
which will usually be intractable for large systems.
Our new algorithm, bootstrapped DQN, approximates this approach to exploration via
randomized value functions sampled from an approximate posterior. Recently, authors have
proposed the RLSVI algorithm which accomplishes this for linearly parameterized value
functions. Surprisingly, RLSVI recovers state of the art guarantees in the setting with
tabular basis functions, but its performance is crucially dependent upon a suitable linear
representation of the value function [16]. We extend these ideas to produce an algorithm
that can simultaneously perform generalization and exploration with a flexible nonlinear
value function representation. Our method is simple, general and compatible with almost all
advances in deep RL at low computational cost and with few tuning parameters.
5 Deep Exploration
Uncertainty estimates allow an agent to direct its exploration at potentially informative states
and actions. In bandits, this choice of directed exploration rather than dithering generally
categorizes efficient algorithms. The story in RL is not as simple, directed exploration is not
enough to guarantee efficiency; the exploration must also be deep. Deep exploration means
exploration which is directed over multiple time steps; it can also be called ?planning to
learn? or ?far-sighted? exploration. Unlike bandit problems, which balance actions which
are immediately rewarding or immediately informative, RL settings require planning over
several time steps [10]. For exploitation, this means that an efficient agent must consider the
future rewards over several time steps and not simply the myopic rewards. In exactly the
same way, efficient exploration may require taking actions which are neither immediately
rewarding, nor immediately informative.
To illustrate this distinction, consider a simple deterministic chain $\{s_{-3}, \ldots, s_{+3}\}$ with a three-step
horizon starting from state $s_0$. This MDP is known to the agent a priori, with
deterministic actions "left" and "right". All states have zero reward, except for the leftmost
state $s_{-3}$, which has known reward $\epsilon > 0$, and the rightmost state $s_3$, which is unknown. In
order to reach either a rewarding state or an informative state within three steps from $s_0$ the
agent must plan a consistent strategy over several time steps. Figure 2 depicts the planning
and look ahead trees for several algorithmic approaches in this example MDP. The action
"left" is gray, the action "right" is black. Rewarding states are depicted as red, informative
states as blue. Dashed lines indicate that the agent can plan ahead for either rewards or
information. Unlike bandit algorithms, an RL agent can plan to exploit future rewards. Only
an RL agent with deep exploration can plan to learn.
Figure 2: Planning, learning and exploration in RL. (a) Bandit algorithm; (b) RL+dithering; (c) RL+shallow explore; (d) RL+deep explore.
5.1 Testing for deep exploration
We now present a series of didactic computational experiments designed to highlight the
need for deep exploration. These environments can be described by chains of length N > 3
in Figure 3. Each episode of interaction lasts N + 9 steps after which point the agent resets
to the initial state s2 . These are toy problems intended to be expository rather than entirely
realistic. Balancing a well known and mildly successful strategy versus an unknown, but
potentially more rewarding, approach can emerge in many practical applications.
Figure 3: Scalable environments that require deep exploration.
These environments may be described by a finite tabular MDP. However, we consider
algorithms which interact with the MDP only through raw pixel features. We consider
two feature mappings $\phi_{\mathrm{1hot}}(s_t) := (\mathbb{1}\{x = s_t\})$ and $\phi_{\mathrm{therm}}(s_t) := (\mathbb{1}\{x \leq s_t\})$ in $\{0, 1\}^N$.
We present results for $\phi_{\mathrm{therm}}$, which worked better for all DQN variants due to better
generalization, but the difference was relatively small - see Appendix C. Thompson DQN
is the same as bootstrapped DQN, but resamples every timestep. Ensemble DQN uses the
same architecture as bootstrapped DQN, but with an ensemble policy.
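The two feature mappings are one-liners; a sketch assuming states are integers in {1, ..., N}.

```python
import numpy as np

def phi_one_hot(s, N):
    return (np.arange(1, N + 1) == s).astype(np.float32)   # 1{x = s_t}

def phi_therm(s, N):
    return (np.arange(1, N + 1) <= s).astype(np.float32)   # 1{x <= s_t} (thermometer)
```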
We say that the algorithm has successfully learned the optimal policy when it has successfully
completed one hundred episodes with optimal reward of 10. For each chain length, we ran
each learning algorithm for 2000 episodes across three seeds. We plot the median time to learn
in Figure 4, together with a conservative lower bound of $99 + 2^{N-11}$ on the expected time to
learn for any shallow exploration strategy [16]. Only bootstrapped DQN demonstrates a
graceful scaling to long chains which require deep exploration.
Figure 4: Only Bootstrapped DQN demonstrates deep exploration.
5.2 How does bootstrapped DQN drive deep exploration?
Bootstrapped DQN explores in a manner similar to the provably-efficient algorithm PSRL
[13] but it uses a bootstrapped neural network to approximate a posterior sample for the value.
Unlike PSRL, bootstrapped DQN directly samples a value function and so does not require
further planning steps. This algorithm is similar to RLSVI, which is also provably-efficient
[16], but with a neural network instead of linear value function and bootstrap instead of
Gaussian sampling. The analysis for the linear setting suggests that this nonlinear approach
will work well so long as the distribution $\{Q_1, \ldots, Q_K\}$ remains stochastically optimistic [16],
or at least as spread out as the "correct" posterior.
Bootstrapped DQN relies upon random initialization of the network weights as a prior
to induce diversity. Surprisingly, we found this initial diversity was enough to maintain
diverse generalization to new and unseen states for large and deep neural networks. This
is effective for our experimental setting, but will not work in all situations. In general it
may be necessary to maintain some more rigorous notion of ?prior?, potentially through
the use of artificial prior data to maintain diversity [15]. One potential explanation for the
efficacy of simple random initialization is that unlike supervised learning or bandits, where
all networks fit the same data, each of our $Q_k$ heads has a unique target network. This,
together with stochastic minibatch and flexible nonlinear representations, means that even
small differences at initialization may become bigger as they refit to unique TD errors.
Bootstrapped DQN does not require that any single network $Q_k$ is initialized to the correct
policy of ?right? at every step, which would be exponentially unlikely for large chains N . For
the algorithm to be successful in this example we only require that the networks generalize in
a diverse way to the actions they have never chosen in the states they have not visited very
often. Imagine that, in the example above, the network has made it as far as state $\tilde{N} < N$,
but never observed the action right $a = 2$. As long as one head $k$ imagines $Q(\tilde{N}, 2) > Q(\tilde{N}, 1)$,
then TD bootstrapping can propagate this signal back to $s = 1$ through the target network
to drive deep exploration. The expected time for these estimates at n to propagate to
at least one head grows gracefully in n, even for relatively small K, as our experiments
show. We expand upon this intuition with a video designed to highlight how bootstrapped
DQN demonstrates deep exploration https://youtu.be/e3KuV_d0EMk. We present further
evaluation on a difficult stochastic MDP in Appendix C.
6 Arcade Learning Environment
We now evaluate our algorithm across 49 Atari games on the Arcade Learning Environment
[1]. Importantly, and unlike the experiments in Section 5, these domains are not specifically
designed to showcase our algorithm. In fact, many Atari games are structured so that
small rewards always indicate part of an optimal policy. This may be crucial for the strong
performance observed by dithering strategies2 . We find that exploration via bootstrapped
DQN produces significant gains versus ?-greedy in this setting. Bootstrapped DQN reaches
peak performance roughly similar to DQN. However, our improved exploration mean we reach
human performance on average 30% faster across all games. This translates to significantly
improved cumulative rewards through learning.
We follow the setup of [25] for our network architecture and benchmark our performance
against their algorithm. Our network structure is identical to the convolutional structure
of DQN [12] except we split 10 separate bootstrap heads after the convolutional layer
as per Figure 1a. Recently, several authors have provided architectural and algorithmic
improvements to DDQN [26, 18]. We do not compare our results to these since their advances
are orthogonal to our concern and could easily be incorporated to our bootstrapped DQN
design. Full details of our experimental set up are available in Appendix D.
6.1 Implementing bootstrapped DQN at scale
We now examine how to generate online bootstrap samples for DQN in a computationally
efficient manner. We focus on three key questions: how many heads do we need, how should
we pass gradients to the shared network and how should we bootstrap data online? We make
significant compromises in order to maintain computational cost comparable to DQN.
Figure 5a presents the cumulative reward of bootstrapped DQN on the game Breakout, for
different number of heads K. More heads leads to faster learning, but even a small number
of heads captures most of the benefits of bootstrapped DQN. We choose K = 10.
Figure 5: Examining the sensitivities of bootstrapped DQN. (a) Number of bootstrap heads K; (b) probability of data sharing p.
The shared network architecture allows us to train this combined network via backpropagation.
Feeding K network heads to the shared convolutional network effectively increases the learning
rate for this portion of the network. In some games, this leads to premature and sub-optimal
convergence. We found the best final scores by normalizing the gradients by 1/K, but this
also leads to slower early learning. See Appendix D for more details.
² By contrast, imagine that the agent received a small immediate reward for dying; dithering strategies would be hopeless at solving this problem, just like Section 5.
To implement an online bootstrap we use an independent Bernoulli mask $w_1, \ldots, w_K \sim \mathrm{Ber}(p)$
for each head in each episode³. These flags are stored in the memory replay buffer and
identify which heads are trained on which data. However, when trained using a shared
minibatch the algorithm will also require an effective 1/p more iterations; this is undesirable
computationally. Surprisingly, we found the algorithm performed similarly irrespective of
p and all outperformed DQN, as shown in Figure 5b. This is strange and we discuss this
phenomenon in Appendix D. However, in light of this empirical observation for Atari, we
chose p=1 to save on minibatch passes. As a result bootstrapped DQN runs at similar
computational speed to vanilla DQN on identical hardware⁴.
6.2 Efficient exploration in Atari
We find that Bootstrapped DQN drives efficient exploration in several Atari games. For
the same amount of game experience, bootstrapped DQN generally outperforms DQN with
ε-greedy exploration. Figure 6 demonstrates this effect for a diverse selection of games.
Figure 6: Bootstrapped DQN drives more efficient exploration.
On games where DQN performs well, bootstrapped DQN typically performs better. Bootstrapped DQN does not reach human performance on Amidar (DQN does) but does on Beam
Rider and Battle Zone (DQN does not). To summarize this improvement in learning time we
consider the number of frames required to reach human performance. If bootstrapped DQN
reaches human performance in 1/x frames of DQN we say it has improved by x. Figure 7
shows that Bootstrapped DQN typically reaches human performance significantly faster.
Figure 7: Bootstrapped DQN reaches human performance faster than DQN.
On most games where DQN does not reach human performance, bootstrapped DQN does
not solve the problem by itself. On some challenging Atari games where deep exploration is
conjectured to be important [25] our results are not entirely successful, but still promising.
In Frostbite, bootstrapped DQN reaches the second level much faster than DQN but network
instabilities cause the performance to crash. In Montezuma's Revenge, bootstrapped DQN
reaches the first key after 20m frames (DQN never observes a reward even after 200m
frames) but does not properly learn from this experience⁵. Our results suggest that improved
exploration may help to solve these remaining games, but also highlight the importance of
other problems like network instability, reward clipping and temporally extended rewards.
³ p=0.5 is double-or-nothing bootstrap [17]; p=1 is ensemble with no bootstrapping at all.
⁴ Our implementation K=10, p=1 ran with less than a 20% increase on wall-time versus DQN.
⁵ An improved training method, such as prioritized replay [18], may help solve this problem.
6.3 Overall performance
Bootstrapped DQN is able to learn much faster than DQN. Figure 8 shows that bootstrapped
DQN also improves upon the final score across most games. However, the real benefits to
efficient exploration mean that bootstrapped DQN outperforms DQN by orders of magnitude
in terms of the cumulative rewards through learning (Figure 9). In both figures we normalize
performance relative to a fully random policy. The most similar work to ours presents
several other approaches to improved exploration in Atari [20] they optimize for AUC-20, a
normalized version of the cumulative returns after 20m frames. According to their metric,
averaged across the 14 games they consider, we improve upon both base DQN (0.29) and
their best method (0.37) to obtain 0.62 via bootstrapped DQN. We present these results
together with results tables across all 49 games in Appendix D.4.
Figure 8: Bootstrapped DQN typically improves upon the best policy.
Figure 9: Bootstrapped DQN improves cumulative rewards by orders of magnitude.
6.4 Visualizing bootstrapped DQN
We now present some more insight to how bootstrapped DQN drives deep exploration in Atari.
In each game, although each head $Q_1, \ldots, Q_{10}$ learns a high-scoring policy, the policies they
find are quite distinct. In the video https://youtu.be/Zm2KoT82O_M we show the evolution
of these policies simultaneously for several games. Although each head performs well, they
each follow a unique policy. By contrast, ε-greedy strategies are almost indistinguishable for
small values of ε and totally ineffectual for larger values. We believe that this deep exploration
is key to improved learning, since diverse experiences allow for better generalization.
Disregarding exploration, bootstrapped DQN may be beneficial as a purely exploitative
policy. We can combine all the heads into a single ensemble policy, for example by choosing
the action with the most votes across heads. This approach might have several benefits.
First, we find that the ensemble policy can often outperform any individual policy. Second,
the distribution of votes across heads to give a measure of the uncertainty in the optimal
policy. Unlike vanilla DQN, bootstrapped DQN can know what it doesn't know. In an
application where executing a poorly-understood action is dangerous this could be crucial. In
the video https://youtu.be/0jvEcC5JvGY we visualize this ensemble policy across several
games. We find that the uncertainty in this policy is surprisingly interpretable: all heads
agree at clearly crucial decision points, but remain diverse at other less important steps.
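A sketch of this majority-vote ensemble policy, with the vote distribution doubling as a crude per-state uncertainty signal (function name is ours).

```python
import numpy as np

def ensemble_vote(q_values):
    """q_values: (K, num_actions) -- one row of action values per head."""
    votes = q_values.argmax(axis=1)                        # each head's greedy action
    counts = np.bincount(votes, minlength=q_values.shape[1])
    return int(counts.argmax()), counts / counts.sum()     # action, vote distribution
```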
7 Closing remarks
In this paper we present bootstrapped DQN as an algorithm for efficient reinforcement
learning in complex environments. We demonstrate that the bootstrap can produce useful
uncertainty estimates for deep neural networks. Bootstrapped DQN is computationally
tractable and also naturally scalable to massive parallel systems. We believe that, beyond
our specific implementation, randomized value functions represent a promising alternative to
dithering for exploration. Bootstrapped DQN practically combines efficient generalization
with exploration for complex nonlinear value functions.
8
References
[1] Marc G. Bellemare, Yavar Naddaf, Joel Veness, and Michael Bowling. The arcade learning environment: An evaluation platform for general agents. arXiv preprint arXiv:1207.4708, 2012.
[2] Peter J. Bickel and David A. Freedman. Some asymptotic theory for the bootstrap. The Annals of Statistics, pages 1196-1217, 1981.
[3] Charles Blundell, Julien Cornebise, Koray Kavukcuoglu, and Daan Wierstra. Weight uncertainty in neural networks. ICML, 2015.
[4] Christoph Dann and Emma Brunskill. Sample complexity of episodic fixed-horizon reinforcement learning. In Advances in Neural Information Processing Systems, pages 2800-2808, 2015.
[5] Bradley Efron. The jackknife, the bootstrap and other resampling plans, volume 38. SIAM, 1982.
[6] Bradley Efron and Robert J. Tibshirani. An introduction to the bootstrap. CRC Press, 1994.
[7] Yarin Gal and Zoubin Ghahramani. Dropout as a Bayesian approximation: Representing model uncertainty in deep learning. arXiv preprint arXiv:1506.02142, 2015.
[8] Arthur Guez, David Silver, and Peter Dayan. Efficient Bayes-adaptive reinforcement learning using sample-based search. In Advances in Neural Information Processing Systems, pages 1025-1033, 2012.
[9] Thomas Jaksch, Ronald Ortner, and Peter Auer. Near-optimal regret bounds for reinforcement learning. Journal of Machine Learning Research, 11:1563-1600, 2010.
[10] Sham Kakade. On the Sample Complexity of Reinforcement Learning. PhD thesis, University College London, 2003.
[11] Tze Leung Lai and Herbert Robbins. Asymptotically efficient adaptive allocation rules. Advances in Applied Mathematics, 6(1):4-22, 1985.
[12] Volodymyr Mnih et al. Human-level control through deep reinforcement learning. Nature, 518(7540):529-533, 2015.
[13] Ian Osband, Daniel Russo, and Benjamin Van Roy. (More) efficient reinforcement learning via posterior sampling. In NIPS, pages 3003-3011. Curran Associates, Inc., 2013.
[14] Ian Osband and Benjamin Van Roy. Model-based reinforcement learning and the eluder dimension. In Advances in Neural Information Processing Systems, pages 1466-1474, 2014.
[15] Ian Osband and Benjamin Van Roy. Bootstrapped Thompson sampling and deep exploration. arXiv preprint arXiv:1507.00300, 2015.
[16] Ian Osband, Benjamin Van Roy, and Zheng Wen. Generalization and exploration via randomized value functions. arXiv preprint arXiv:1402.0635, 2014.
[17] Art B. Owen, Dean Eckles, et al. Bootstrapping data arrays of arbitrary order. The Annals of Applied Statistics, 6(3):895-927, 2012.
[18] Tom Schaul, John Quan, Ioannis Antonoglou, and David Silver. Prioritized experience replay. arXiv preprint arXiv:1511.05952, 2015.
[19] Nitish Srivastava, Geoffrey Hinton, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov. Dropout: A simple way to prevent neural networks from overfitting. The Journal of Machine Learning Research, 15(1):1929-1958, 2014.
[20] Bradly C. Stadie, Sergey Levine, and Pieter Abbeel. Incentivizing exploration in reinforcement learning with deep predictive models. arXiv preprint arXiv:1507.00814, 2015.
[21] Malcolm J. A. Strens. A Bayesian framework for reinforcement learning. In ICML, pages 943-950, 2000.
[22] Richard Sutton and Andrew Barto. Reinforcement Learning: An Introduction. MIT Press, March 1998.
[23] Gerald Tesauro. Temporal difference learning and TD-Gammon. Communications of the ACM, 38(3):58-68, 1995.
[24] W. R. Thompson. On the likelihood that one unknown probability exceeds another in view of the evidence of two samples. Biometrika, 25(3/4):285-294, 1933.
[25] Hado Van Hasselt, Arthur Guez, and David Silver. Deep reinforcement learning with double Q-learning. arXiv preprint arXiv:1509.06461, 2015.
[26] Ziyu Wang, Nando de Freitas, and Marc Lanctot. Dueling network architectures for deep reinforcement learning. arXiv preprint arXiv:1511.06581, 2015.
[27] Zheng Wen and Benjamin Van Roy. Efficient exploration and value function generalization in deterministic systems. In NIPS, pages 3021-3029, 2013.
SURGE: Surface Regularized Geometry Estimation from a Single Image
Peng Wang¹, Xiaohui Shen², Bryan Russell², Scott Cohen², Brian Price², Alan Yuille³
¹University of California, Los Angeles  ²Adobe Research  ³Johns Hopkins University
Abstract
This paper introduces an approach to regularize 2.5D surface normal and depth
predictions at each pixel given a single input image. The approach infers and
reasons about the underlying 3D planar surfaces depicted in the image to snap
predicted normals and depths to inferred planar surfaces, all while maintaining
fine detail within objects. Our approach comprises two components: (i) a four-stream convolutional neural network (CNN) where depths, surface normals, and
likelihoods of planar region and planar boundary are predicted at each pixel,
followed by (ii) a dense conditional random field (DCRF) that integrates the four
predictions such that the normals and depths are compatible with each other and
regularized by the planar region and planar boundary information. The DCRF is
formulated such that gradients can be passed to the surface normal and depth CNNs
via backpropagation. In addition, we propose new planar-wise metrics to evaluate
geometry consistency within planar surfaces, which are more tightly related to
dependent 3D editing applications. We show that our regularization yields a 30%
relative improvement in planar consistency on the NYU v2 dataset [24].
1 Introduction
Recent efforts to estimate the 2.5D layout of a depicted scene from a single image, such as per-pixel
depths and surface normals, have yielded high-quality outputs respecting both the global scene layout
and fine object detail [2, 6, 7, 29]. Upon closer inspection, however, the predicted depths and normals
may fail to be consistent with the underlying surface geometry. For example, consider the depth and
normal predictions from the contemporary approach of Eigen and Fergus [6] shown in Figure 1 (b)
(Before DCRF). Notice the significant distortion in the predicted depth corresponding to the depicted
planar surfaces, such as the back wall and cabinet. We argue that such distortion arises from the fact
that the 2.5D predictions (i) are made independently per pixel from appearance information alone,
and (ii) do not explicitly take into account the underlying surface geometry. When 3D geometry has
been used, e.g., [29], it often consists of a boxy room layout constraint, which may be too coarse
and fail to account for local planar regions that do not adhere to the box constraint. Moreover, when
multiple 2.5D predictions are made (e.g., depth and normals), they are not explicitly enforced to
agree with each other.
To overcome the above issues, we introduce an approach to identify depicted 3D planar regions in the
image along with their spatial extent, and to leverage such planar regions to regularize the depth and
surface normal outputs. We formulate our approach as a four-stream convolutional neural network
(CNN), followed by a dense conditional random field (DCRF). The four-stream CNN independently
predicts at each pixel the surface normal, depth, and likelihoods of planar region and planar boundary.
The four cues are integrated into a DCRF, which encourages the output depths and normals to align
with the inferred 3D planar surfaces while maintaining fine detail within objects. Furthermore, the
output depths and normals are explicitly encouraged to agree with each other.
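To make the notion of depth-normal agreement concrete: given pinhole intrinsics, a depth map induces surface normals, and the induced normals can be compared against the predicted ones. The sketch below is one simple way to do this (finite-difference tangents of the back-projected point cloud), not the paper's DCRF; the intrinsics fx, fy, cx, cy are assumed known, and the normal sign convention is left unspecified.

```python
import numpy as np

def normals_from_depth(depth, fx, fy, cx, cy):
    """Estimate per-pixel normals from a depth map (H, W) as the cross product
    of the image-axis tangents of the back-projected point cloud."""
    v, u = np.indices(depth.shape)
    P = np.stack([(u - cx) / fx * depth, (v - cy) / fy * depth, depth], axis=-1)
    du = np.gradient(P, axis=1)      # tangent along image columns
    dv = np.gradient(P, axis=0)      # tangent along image rows
    n = np.cross(du, dv)
    return n / (np.linalg.norm(n, axis=-1, keepdims=True) + 1e-8)
```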
30th Conference on Neural Information Processing Systems (NIPS 2016), Barcelona, Spain.
Figure 1: Framework of the SURGE system. (a) We induce surface regularization in geometry estimation through DCRF, and enable joint learning with the CNN, which largely improves the visual quality (b).
We show that our DCRF is differentiable with respect to depth and surface normals, and allows
back-propagation to the depth and normal CNNs during training. We demonstrate that the proposed
approach shows relative improvement over the base CNNs for both depth and surface normal
prediction on the NYU v2 dataset using the standard evaluation criteria, and is significantly better
when evaluated using our proposed plane-wise criteria.
2 Related work
From a single image, traditional geometry estimation approaches rely on extracting visual primitives
such as vanishing points and lines [10] or abstract the scenes with major plane and box representations [22, 26]. Those methods can only obtain sparse geometry representations, and some of them
require certain assumptions (e.g. Manhattan world).
With the advance of deep neural networks and their strong feature representation, dense geometry,
i.e., pixel-wise depth and normal maps, can be readily estimated from a single image [7]. Long-range
context and semantic cues are also incorporated in later works to refine the dense prediction by
combining the networks with conditional random fields (CRF) [19, 20, 28, 29]. Most recently,
Eigen and Fergus [6] further integrate depth and normal estimation into a large multi-scale network
structure, which significantly improves the geometry estimation accuracy. Nevertheless, the output
of the networks still lacks regularization over planar surfaces due to the adoption of pixel-wise
loss functions during network training, resulting in unsatisfactory experience in 3D image editing
applications.
For inducing non-local regularization, DCRF has been commonly used in various computer vision
problems such as semantic segmentation [5, 32], optical flow [16] and stereo [3]. However, the
features for the affinity term are mostly simple ones such as color and location. In contrast, we have
designed a unique planar surface affinity term and a novel compatibility term to enable 3D planar
regularization over geometry estimation.
Finally, there is also a rich literature in 3D reconstruction from RGBD images [8, 12, 24, 25, 30],
where planar surfaces are usually inferred. However, they all assume that the depth data have been
acquired. To the best of our knowledge, we are the first to explore using planar surface information to
regularize dense geometry estimation by only using the information of a single RGB image.
3 Overview
Fig. 1 illustrates our approach. An input image is passed through a four-stream convolutional neural
network (CNN) that predicts at each pixel a surface normal, depth value, and whether the pixel belongs
to a planar surface or edge (i.e., edge separating different planar surfaces or semantic regions), along
with their prediction confidences. We build on existing CNNs [6, 31] to produce the four maps.
While the CNNs for surface normals and depths produce high-fidelity outputs, they do not explicitly
enforce their predictions to agree with depicted planar regions. To address this, we propose a fully-connected dense conditional random field (DCRF) that reasons over the CNN outputs to regularize
the surface normals and depths. The DCRF jointly aligns the surface normals and depths to individual
planar surfaces derived from the edge and planar surface maps, all while preserving fine detail within
objects. Our DCRF leverages the advantages of previous fully-connected CRFs [15] in terms of both
its non-local connectivity, which allows propagation of information across an entire planar surface,
and efficiency during inference. We present our DCRF formulation in Section 4, followed by our
algorithm for joint learning and inference within a CNN in Section 5.
Figure 2: The orthogonal compatibility constraint inside the DCRF. We recover 3d points from the
depth map and require the difference vector to be perpendicular to the normal predictions.
4 DCRF for Surface Regularized Geometry Estimation
In this section, we present our DCRF that incorporates plane and edge predictions for depth and
surface normal regularization. Specifically, the field of variables we optimize are depths, $\mathbf{D} = \{d_i\}_{i=1}^{K}$, where $K$ is the number of pixels, and normals, $\mathbf{N} = \{\mathbf{n}_i\}_{i=1}^{K}$, where $\mathbf{n}_i = [n_{ix}, n_{iy}, n_{iz}]^{\top}$ indicates the 3D normal direction.
In addition, as stated in the overview (Sec. 3), we have four types of information from the CNN predictions, namely a predicted normal map $\mathbf{N}^o = \{\mathbf{n}^o_i\}_{i=1}^{K}$, a depth map $\mathbf{D}^o = \{d^o_i\}_{i=1}^{K}$, a plane probability map $\mathbf{P}^o$ and edge predictions $\mathbf{E}^o$. Following the general form of DCRF [16], our problem can be formulated as,
$$\min_{\mathbf{N},\mathbf{D}} \sum_{i} \phi_u(\mathbf{n}_i, d_i \mid \mathbf{N}^o, \mathbf{D}^o) + \lambda \sum_{i,j,\, i \neq j} \psi_r(\mathbf{n}_i, \mathbf{n}_j, d_i, d_j \mid \mathbf{P}^o, \mathbf{E}^o) \quad \text{with } \|\mathbf{n}_i\|_2 = 1, \tag{1}$$
where $\phi_u(\cdot)$ is a unary term encouraging the optimized surface normals $\mathbf{n}_i$ and depths $d_i$ to be close to the outputs $\mathbf{n}^o_i$ and $d^o_i$ from the networks. $\psi_r(\cdot,\cdot)$ is a pairwise fully connected regularization term depending on the information from the plane map $\mathbf{P}^o$ and edge map $\mathbf{E}^o$, where we seek to encourage consistency of surface normals and depths within planar regions with the underlying depicted 3D planar surfaces. Also, we constrain the normal predictions to have unit length. Specifically, the definitions of the unary and pairwise terms in our model are presented as follows.
4.1 Unary terms
Motivated by Monte Carlo dropout [27], we notice that when forward propagating multiple times
with dropout, the CNN predictions have different variations across different pixels, indicating the
prediction uncertainty. Based on the prediction variance from the normal and depth networks, we
are able to obtain pixel-wise confidence values $w_i^n$ and $w_i^d$ for normal and depth predictions. We
leverage such information in DCRF inference by trusting the predictions with higher confidence
while regularizing more over ones with low confidence. By integrating the confidence values, our
unary term is defined as,
$$\phi_u(\mathbf{n}_i, d_i \mid \mathbf{N}^o, \mathbf{D}^o) = \frac{1}{2} w_i^n\, \phi_n(\mathbf{n}_i \mid \mathbf{n}^o_i) + \frac{1}{2} w_i^d\, \phi_d(d_i \mid d^o_i), \tag{2}$$
where $\phi_n(\mathbf{n}_i \mid \mathbf{n}^o_i) = 1 - \mathbf{n}_i \cdot \mathbf{n}^o_i$ is the cosine distance between the input and output surface normals, and $\phi_d(d_i \mid d^o_i) = (d_i - d^o_i)^2$ is the squared difference between input and output depths.
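As a concrete reading of Eqn. (2), the following minimal sketch evaluates the confidence-weighted unary energy per pixel; the array names and shapes are our own illustrative assumptions, not code released with the paper.

```python
import numpy as np

def unary_energy(n, d, n_o, d_o, w_n, w_d):
    """Confidence-weighted unary term of Eqn. (2), evaluated per pixel.

    n, n_o : (K, 3) current and CNN-predicted unit normals
    d, d_o : (K,)   current and CNN-predicted depths
    w_n, w_d : (K,) per-pixel confidences from Monte Carlo dropout
    """
    phi_n = 1.0 - np.sum(n * n_o, axis=1)   # cosine distance between normals
    phi_d = (d - d_o) ** 2                  # squared depth difference
    return 0.5 * w_n * phi_n + 0.5 * w_d * phi_d
```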
4.2 Pairwise term for regularization.
We follow the convention of DCRF with Gibbs energy [17] for pairwise designing, but also bring in
the confidence value of each pixel as described in Sec. 4.1. Formally, it is defined as,
$$\psi_r(\mathbf{n}_i, \mathbf{n}_j, d_i, d_j \mid \mathbf{P}^o, \mathbf{E}^o) = \left( w^n_{i,j}\, \psi_n(\mathbf{n}_i, \mathbf{n}_j) + w^d_{i,j}\, \psi_d(d_i, d_j, \mathbf{n}_i, \mathbf{n}_j) \right) A_{i,j}(\mathbf{P}^o, \mathbf{E}^o), \tag{3}$$
$$\text{where } w^n_{i,j} = \tfrac{1}{2}(w^n_i + w^n_j), \qquad w^d_{i,j} = \tfrac{1}{2}(w^d_i + w^d_j).$$
Here, $A_{i,j}$ is a pairwise planar affinity indicating whether pixel locations $i$ and $j$ belong to the same planar surface derived from the inferred edge and planar surface maps. $\psi_n(\cdot)$ and $\psi_d(\cdot)$ regularize the output surface normals and depths to be aligned inside the underlying 3D plane. Here, we use simplified notations, i.e. $A_{i,j}$, $\psi_n(\cdot)$ and $\psi_d(\cdot)$, for the corresponding terms.
For the compatibility $\psi_n(\cdot)$ of surface normals, we use the same function as $\phi_n(\cdot)$ in Eqn. (2), which measures the cosine distance between $\mathbf{n}_i$ and $\mathbf{n}_j$. For depths, we design an orthogonal compatibility function $\psi_d(\cdot)$ which encourages the normals and depths of each adjacent pixel pair to be consistent and aligned within a 3D planar surface. Next we define $\psi_d(\cdot)$ and $A_{i,j}$.
3
Image
Plane
NCut eigenvectors
Edge
Pairwise planar affinity
Figure 3: Pairwise surface affinity from the plane and edge predictions with computed Ncut features.
We highlight the computed affinity w.r.t. pixel i (red dot).
Orthogonal compatibility: In principle, when two pixels fall in the same plane, the vector connecting their corresponding 3D world coordinates should be perpendicular to their normal directions,
as illustrated in Fig. 2. Formally, this orthogonality constraint can be formulated as,
$$\psi_d(d_i, d_j, \mathbf{n}_i, \mathbf{n}_j) = \frac{1}{2}\left(\mathbf{n}_i \cdot (\mathbf{x}_i - \mathbf{x}_j)\right)^2 + \frac{1}{2}\left(\mathbf{n}_j \cdot (\mathbf{x}_i - \mathbf{x}_j)\right)^2, \quad \text{with } \mathbf{x}_i = d_i \mathbf{K}^{-1} \mathbf{p}_i. \tag{4}$$
Here xi is the 3D world coordinate back projected by 2D pixel coordinate pi (written in homogeneous
coordinates), given the camera calibration matrix K and depth value di . This compatibility encourages
consistency between depth and normals.
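The orthogonality penalty of Eqn. (4) is simple to evaluate once pixels are back-projected; the sketch below assumes homogeneous pixel coordinates and a known calibration matrix, both hypothetical placeholders.

```python
import numpy as np

def ortho_compatibility(d_i, d_j, n_i, n_j, p_i, p_j, K_inv):
    """Orthogonal compatibility of Eqn. (4) for one pixel pair.

    p_i, p_j : (3,) homogeneous pixel coordinates [u, v, 1]
    K_inv    : (3, 3) inverse camera calibration matrix (assumed known)
    """
    x_i = d_i * K_inv @ p_i   # back-projected 3D point for pixel i
    x_j = d_j * K_inv @ p_j
    diff = x_i - x_j          # should be orthogonal to both normals in-plane
    return 0.5 * (n_i @ diff) ** 2 + 0.5 * (n_j @ diff) ** 2
```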
Pairwise planar affinity: As noted in Eqn. (3), the planar affinity is used to determine whether
pixels i and j belong to the same planar surface from the information of plane and edge. Here Po
helps to check whether two pixels are both inside planar regions, and Eo helps to determine whether
the two pixels belong to the same planar surface. Here, for efficiency, we chose the form of Gaussian
bilateral affinity to represent such information since it has been successfully adopted by many
previous works with efficient inference, e.g. in discrete label space for semantic segmentation [5]
or in continuous label space for edge-awared smoothing [3, 16]. Specifically, following the form of
bilateral filters, our planar surface affinity is defined as,
$$A_{i,j}(\mathbf{P}^o, \mathbf{E}^o) = p_i\, p_j \left( \mu_1\, \kappa(\mathbf{f}_i, \mathbf{f}_j; \sigma_\alpha)\, \kappa(\mathbf{c}_i, \mathbf{c}_j; \sigma_\beta) + \mu_2\, \kappa(\mathbf{c}_i, \mathbf{c}_j; \sigma_\gamma) \right), \tag{5}$$
where $\kappa(\mathbf{z}_i, \mathbf{z}_j; \sigma) = \exp\left(-\frac{1}{2\sigma^2}\|\mathbf{z}_i - \mathbf{z}_j\|^2\right)$ is a Gaussian RBF kernel. $p_i$ is the predicted value from the planar map $\mathbf{P}^o$ at pixel $i$; the product $p_i p_j$ indicates that the regularization is activated when both $i$ and $j$ are inside planar regions with high probability. $\mathbf{f}_i$ is the appearance feature derived from the edge map $\mathbf{E}^o$, and $\mathbf{c}_i$ is the 2D coordinate of pixel $i$ in the image. $\mu_1, \mu_2, \sigma_\alpha, \sigma_\beta, \sigma_\gamma$ are parameters.
To transform the pairwise similarity derived from the edge map to the feature representation f for
efficient computing, we borrow the idea from the Normalized Cut (NCut) for segmentation [14, 23],
where we can first generate an affinity matrix between pixels using intervening contour [23], and
perform normalized cut. We select the top 6 resultant eigenvectors as our feature f. A transformation
from edge to the planar affinity using the eigenvectors is shown in Fig. 3. As can be seen from the
affinity map, the NCut features are effective to determine whether two pixels lie in the same planar
surface where the regularization can be performed.
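For illustration, a direct dense evaluation of the affinity in Eqn. (5) could look as follows; the default parameter values are those reported in Sec. 6, but the function and array layout are our assumptions. In practice such O(K²) affinities are applied through efficient bilateral filtering rather than materialized.

```python
import numpy as np

def planar_affinity(p, f, c, mu1=1.0, mu2=0.3,
                    sigma_alpha=0.1, sigma_beta=50.0, sigma_gamma=3.0):
    """Dense pairwise planar affinity A of Eqn. (5).

    p : (K,)   plane probabilities from the plane network
    f : (K, 6) NCut eigenvector features derived from the edge map
    c : (K, 2) 2D pixel coordinates
    Returns a (K, K) affinity matrix; feasible only for small K.
    """
    def rbf(z, sigma):
        sq = np.sum((z[:, None, :] - z[None, :, :]) ** 2, axis=-1)
        return np.exp(-sq / (2.0 * sigma ** 2))

    A = p[:, None] * p[None, :] * (
        mu1 * rbf(f, sigma_alpha) * rbf(c, sigma_beta)
        + mu2 * rbf(c, sigma_gamma))
    np.fill_diagonal(A, 0.0)   # the pairwise sums exclude j == i
    return A
```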
5 Optimization
Given the formulation in Sec. 4, we first discuss the fast inference implementation for DCRF, and
then present the algorithm of joint training with CNNs through back-propagation.
5.1 Inference
To optimize the objective function defined in Eqn.(1), we use mean-field approximation for fast
inference as used in the optimization of DCRF [15]. In addition, we chose to use coordinate descent
to sequentially optimize normals and depth. When optimizing normals, for simplicity and efficiency,
we do not consider the term $\psi_d(\cdot)$ of Eqn. (3), yielding the update for pixel $i$ at iteration $t$ as,
$$\mathbf{n}_i^{(t)} \leftarrow \frac{1}{2} w_i^n\, \mathbf{n}_i^o + \frac{\lambda}{2} \sum_{j \neq i} w_j^n\, \mathbf{n}_j^{(t-1)} A_{i,j}, \qquad \mathbf{n}_i^{(t)} \leftarrow \mathbf{n}_i^{(t)} / \|\mathbf{n}_i^{(t)}\|_2, \tag{6}$$
which is equivalent to first performing a dense bilateral filtering [4] with our pairwise planar affinity
term Ai,j for the predicted normal map, and then applying L2 normalization.
Given the optimized normal information, we further optimize the depth values. Similar to normals, after performing the mean-field approximation, the inferred update equation for depth at iteration $t$ is,
$$d_i^{(t)} \leftarrow \frac{1}{\zeta_i} \left( w_i^d\, d_i^o + \lambda\, (\mathbf{n}_i \cdot \mathbf{p}_i) \sum_{j \neq i} A_{i,j}\, w_j^d\, d_j^{(t-1)}\, (\mathbf{n}_j \cdot \mathbf{p}_j) \right), \tag{7}$$
where $\zeta_i = w_i^d + \lambda\, (\mathbf{n}_i \cdot \mathbf{p}_i)\, \mathbf{p}_i \cdot \sum_{j \neq i} A_{i,j}\, w_j^d\, \mathbf{n}_j$. Since the graph is densely connected, previous work [16] indicates that only a few (<10) iterations are needed to achieve reasonable performance. In practice we found that 5 iterations for normal inference and 2 iterations for depth inference yielded reasonable results.
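To make the two updates concrete, here is a dense sketch of the coordinate-descent mean-field inference of Eqns. (6) and (7). It assumes the affinity matrix has a zero diagonal, `rays` stands for the back-projected rays K⁻¹p_i, and the dense matrix products stand in for the fast bilateral filtering used in practice.

```python
import numpy as np

def mean_field_inference(n_o, d_o, w_n, w_d, A, rays, lam=2.0,
                         n_iters=5, d_iters=2):
    """Mean-field updates of Eqns. (6) and (7), written densely for clarity.

    n_o : (K, 3) CNN normals; d_o : (K,) CNN depths
    w_n, w_d : (K,) confidences; A : (K, K) planar affinity (Eqn. (5))
    rays : (K, 3) back-projected pixel rays K^{-1} p_i
    """
    # Normals, Eqn. (6): one dense bilateral step, then L2 normalization.
    n = n_o.copy()
    for _ in range(n_iters):
        n = 0.5 * w_n[:, None] * n_o + 0.5 * lam * (A @ (w_n[:, None] * n))
        n /= np.linalg.norm(n, axis=1, keepdims=True)

    # Depths, Eqn. (7), with the optimized normals held fixed.
    d = d_o.copy()
    ndotp = np.sum(n * rays, axis=1)                       # n_i . p_i per pixel
    Sn = A @ (w_d[:, None] * n)                            # sum_j A_ij w_j^d n_j
    zeta = w_d + lam * ndotp * np.sum(rays * Sn, axis=1)   # normalizer zeta_i
    for _ in range(d_iters):
        msg = A @ (w_d * ndotp * d)                        # pairwise message
        d = (w_d * d_o + lam * ndotp * msg) / zeta
    return n, d
```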
5.2 Joint training of CNN and DCRF
We further implement the DCRF inference as a trainable layer as in [32] by considering the inference
as feedforward process, to enable joint training together with the normal and depth neural networks.
This makes the planar surface information able to be back-propagated to the neural networks and
further refine their output. We describe the gradients back-propagated to the two networks respectively.
Back-propagation to the normal network. Suppose the gradient of the normal passed from the upper layer after the DCRF for pixel $i$ is $\nabla_f(\mathbf{n}_i)$, a $3 \times 1$ vector. We first back-propagate it through the L2 normalization using $\nabla_{L_2}(\mathbf{n}_i) = \left(\mathbf{I}/\|\mathbf{n}_i\| - \mathbf{n}_i \mathbf{n}_i^{\top}/\|\mathbf{n}_i\|^3\right) \nabla_f(\mathbf{n}_i)$, and then back-propagate through the mean-field approximation in Eqn. (6) as,
$$\frac{\partial L(\mathbf{N})}{\partial \mathbf{n}_i} = \frac{\nabla_{L_2}(\mathbf{n}_i)}{2} + \frac{\lambda}{2} \sum_{j \neq i} A_{j,i}\, \nabla_{L_2}(\mathbf{n}_j), \tag{8}$$
where $L(\mathbf{N})$ is the loss from normal predictions after the DCRF, and $\mathbf{I}$ is the identity matrix.
Back-propagation to the depth network. Similarly for depth, suppose the gradient from the upper
layer is $\nabla_f(d_i)$, the depth gradient for back-propagation through Eqn. (7) can be inferred as,
$$\frac{\partial L(\mathbf{D})}{\partial d_i} = \frac{1}{\zeta_i} \nabla_f(d_i) + \lambda\, (\mathbf{n}_i \cdot \mathbf{p}_i) \sum_{j \neq i} \frac{1}{\zeta_j} A_{j,i}\, (\mathbf{n}_j \cdot \mathbf{p}_j)\, \nabla_f(d_j), \tag{9}$$
where $L(\mathbf{D})$ is the loss from depth predictions after the DCRF.
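A small sketch of the backward pass of Eqn. (8), under the same dense-matrix assumptions as above; the gradient through the depth update (Eqn. (9)) follows the same pattern.

```python
import numpy as np

def grad_through_l2_norm(n, g):
    """Gradient of the L2 normalization step for one normal.

    n : (3,) unnormalized normal; g : (3,) upstream gradient.
    Implements (I/||n|| - n n^T/||n||^3) g, as used before Eqn. (8).
    """
    norm = np.linalg.norm(n)
    return g / norm - n * (n @ g) / norm ** 3

def backprop_normals(N_raw, G, A, lam=2.0):
    """Back-propagation through the mean-field step, Eqn. (8).

    N_raw : (K, 3) pre-normalization normals; G : (K, 3) upstream grads
    A     : (K, K) planar affinity with zero diagonal
    """
    GL2 = np.stack([grad_through_l2_norm(n, g) for n, g in zip(N_raw, G)])
    # (A.T @ GL2)_i = sum_j A_{j,i} GL2_j, matching the sum in Eqn. (8)
    return 0.5 * GL2 + 0.5 * lam * (A.T @ GL2)
```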
Note that during back-propagation for both surface normals and depths we drop the confidences w, since using them during training would make the process very complicated and inefficient. We adopt the
same surface normal and depth loss function as in [6] during joint training. It is possible to also back
propagate the gradients of the depth values to the normal network via the surface normal and depth
compatibility in Eqn. (4). However, this involves the depth values from all the pixels within the same
plane, which may be intractable and cause difficulty during joint learning. We therefore chose not to
back propagate through the compatibility in our current implementation and leave it to future work.
6 Implementation details for DCRF
To predict the input surface normals and depths, we build on the publicly-available implementation
from Eigen and Fergus [6], which is at or near state of the art for both tasks. We compute prediction
confidences for the surface normals and depths using Monte Carlo dropout [27]. Specifically, we
forward propagate through the network 10 times with dropout during testing, and compute the
prediction variance vi at each pixel. The predictions with larger variance vi are considered less stable,
so we set the confidence as wi? = exp(?vi /?? 2 ). We empirically set ?n = 0.1 for normals prediction
and ?d = 0.15 for depth prediction to produce reasonable confidence values.
Specifically, for prediction the plane map Po , we adopt a semantic segmentation network structure
similar to the Deeplab [5] network but with multi-scale output as the FCN [21]. The training is
formulated as a pixel-wise two-class classification problem (planar vs. non-planar). The output of the
network is hereby a plane probability map Po where pi at pixel i indicates the probability of pixel i
belonging to a planar surface. The edge map Eo indicates the plane boundaries. During training, the
ground-truth edges are extracted from the corresponding ground-truth depth and normal maps, and
refined by semantic annotations when available (see Fig.4 for an example). We then adopt the recent
Holistic-nested Edge Detector (HED) network [31] for training. In addition, we augment the network
by adding predicted depth and normal maps as another 4-channel input to improve recall, which is
very important for our regularization since missing edges could mistakenly merge two planes and
propagate errors during the message passing.
For the surface bilateral filter in Eqn. (5), we set the parameters $\sigma_\alpha = 0.1$, $\sigma_\beta = 50$, $\sigma_\gamma = 3$, $\mu_1 = 1$, $\mu_2 = 0.3$, and set $\lambda = 2$ in Eqn. (1) through a grid search over a validation set from [9]. The four types of inputs to the DCRF are aligned and resized to 294×218 by matching the network output
of [6]. During the joint training of DCRF and CNNs, we fix the parameters and fine-tune the network
[Figure 4 panels: Image, Plane, Edge, Normal, Depth]
Figure 4: Four types of ground-truth from the NYU dataset that are used in our algorithm.
based on the weights pre-trained from [6], with the 795 training images, and use the same loss
functions and learning rates as in their depth and normal networks respectively.
Due to limited space, the detailed edge and plane network structures, the learning and inference times
and visualization of confidence values are presented in the supplementary materials.
7 Experiments
We perform all our experiments on the NYU v2 dataset [24]. It contains 1449 images with size of
640×480, which is split into 795 training images and 654 testing images. Each image has an aligned
ground-truth depth map and a manually annotated semantic category map. In addition, we use the
ground-truth surface normals generated by [18] from depth maps. We further use the official NYU
toolbox1 to extract planar surfaces from the ground-truth depth and refine them with the semantic
annotations, from which a binary ground-truth plane map and an edge map are obtained. The details
of generating plane and edge ground-truth are elaborated in supplementary materials. Fig. 4 shows
the produced four types of ground-truth maps for our learning and evaluation.
We implemented all our algorithms based on Caffe [13], including DCRF inference and learning,
which are adapted from the implementation in [1, 32].
Evaluation setup. In the evaluation, we first compare the normals and depths generated by different
baselines and components over the ground truth planar regions, since these are the regions where
we are trying to improve, which are most important for 3D editing applications. We evaluated over
the valid 561×427 area following the convention in [18, 20]. We also perform evaluation over the
ground truth edge area showing that our results preserve better geometry details. Finally, we show
the improvement achieved by our algorithm over the entire image region.
We compare our results against the recent work of Eigen et al. [6] since it is at or near state of the art for both depth and normal estimation. In practice, we use their published results and models for comparison.
In addition, we implemented a baseline method for hard planar regularization, in which the planar
surfaces are explicitly extracted from the network predictions. The normal and depth values within
each plane are then used to fit the plane parameters, from which the regularized normal and depth
values are obtained. We refer to this baseline as "Post-Proc.". For normal prediction, we implemented
another baseline in which a basic Bilateral filter based on the RGB image is used to smooth the
normal map.
In terms of the evaluation criteria, we first adopt the pixel-wise evaluation criteria commonly used
by previous works [6, 28]. However, as mentioned in [11], such metrics mainly evaluate pixel-wise
depth and normal offsets, but do not well reflect the quality of reconstructed structures over edges
and planar surfaces. Thus, we further propose plane-wise metrics that evaluate the consistency of
the predictions inside a ground truth planar region. In the following, we first present evaluations for
normal prediction, and then report the results of depth estimation.
Surface normal criteria. For pixel-wise evaluation, we use the same metrics used in [6].
For plane-wise evaluation, given a set of ground truth planar regions $\{\bar{P}_j\}_{j=1}^{N_P}$, we propose two metrics to evaluate the consistency of normal prediction within the planar regions:
1. Degree variation (var.): it measures the overall planarity inside a plane, and is defined as $\frac{1}{N_P} \sum_j \frac{1}{|\bar{P}_j|} \sum_{i \in \bar{P}_j} \theta(\mathbf{n}_i, \bar{\mathbf{n}}_j)$, where $\theta(\mathbf{n}_i, \mathbf{n}_j) = \arccos(\mathbf{n}_i \cdot \mathbf{n}_j)$ is the degree difference between two normals, and $\bar{\mathbf{n}}_j$ is the mean of the predicted normals inside $\bar{P}_j$.
2. First-order degree gradient (grad.): it measures the smoothness of the normal transition inside a planar region. Formally, it is defined as $\frac{1}{N_P} \sum_j \frac{1}{|\bar{P}_j|} \sum_{i \in \bar{P}_j} \left( \theta(\mathbf{n}_i, \mathbf{n}_{h_i}) + \theta(\mathbf{n}_i, \mathbf{n}_{v_i}) \right)$, where $\mathbf{n}_{h_i}, \mathbf{n}_{v_i}$ are the normals of the right and bottom neighbor pixels of $i$.
1 http://cs.nyu.edu/~silberman/datasets/nyu_depth_v2.html
Evaluation over the planar regions (surface normals):

                 Pixel-wise (over planar region)              Plane-wise
                 lower is better     higher is better         lower is better
Method           mean     median     11.25°  22.5°   30°      var.    grad.
Eigen-VGG [6]    14.5425  8.9735     59.00   80.85   87.38    9.1534  1.1112
RGB-Bilateral    14.4665  8.9439     59.12   80.86   87.41    8.6454  1.1735
Post-Proc.       14.8154  8.6971     59.85   80.52   86.67    7.2753  0.9882
Eigen-VGG (JT)   14.4978  8.9371     59.12   80.90   87.43    8.9601  1.0795
DCRF             14.1934  8.8697     59.27   81.08   87.77    6.9688  0.7441
DCRF (JT)        14.2055  8.8696     59.34   81.13   87.78    6.8866  0.7302
DCRF-conf        13.9732  8.5320     60.89   81.87   88.09    6.8212  0.7407
DCRF-conf (JT)   13.9763  8.2535     62.20   82.35   88.08    6.3939  0.6858
Oracle           13.5804  8.1671     62.83   83.16   88.85    4.9199  0.5923

Edge region:
Eigen-VGG [6]    23.4141  18.3288    30.90   58.91   71.43
DCRF-conf (JT)   23.4694  17.6804    33.63   59.53   71.03

Full image:
Eigen-VGG [6]    20.9322  13.2214    44.43   67.25   75.83
DCRF-conf (JT)   20.6093  12.1704    47.29   68.92   76.64

Table 1: Normal accuracy comparison on the NYU v2 dataset. We compare our final results (DCRF-conf (JT)) against various baselines over ground truth planar regions in the upper part, where JT denotes joint training of the CNN and DCRF as presented in Sec. 5.2. The lower part lists additional comparisons over the edge and full image regions.
Evaluation on surface normal estimation. In upper part of Tab. 1, we show the comparison
results. The first line, i.e. Eigen-VGG, is the result from [6] with VGG net, which serves as our
baseline. The simple RGB-Bilateral filtering can only slightly improve the network output since
it does not contain any planar surface information during the smoothing. The hard regularization
over planar regions ("Post-Proc.") can improve the plane-wise consistency since hard constraints are
enforced in each plane, but it also brings strong artifacts and suffers a significant decrease in pixel-wise
accuracy metrics. Our "DCRF" can bring improvement on both pixel-wise and plane-wise metrics,
while integrating network prediction confidence further makes the DCRF inference achieve much
better results. Specifically, using "DCRF-conf", the plane-wise error metric var. drops from 9.15
produced by the network to 6.8. It demonstrates that our non-local planar surface regularization does
help the predictions especially for the consistency inside planar regions.
We also show the benefits from the joint training of DCRF and CNN. "Eigen-VGG (JT)" denotes
the output of the CNN after joint training, which shows better results than the original network. It
indicates that regularization using DCRF for training also improves the network. By using the joint
trained CNN and DCRF ("DCRF (JT)"), we observe additional improvement over that from "DCRF".
Finally, by combining the confidence from joint trained CNN, our final outputs ("DCRF-conf (JT)")
achieve the best results over all the compared methods. In addition, we also use ground-truth plane
and edge map to regularize the normal output ("Oracle") to get an upper bound when the planar surface
information is perfect. We can see our final results are in fact quite close to "Oracle", demonstrating
the high quality of our plane and edge prediction.
In the bottom part of Tab. 1, we show the evaluation over edge areas (rows marked by "Edge") as well
as on the entire images (marked by "Image"). The edge areas are obtained by dilating the ground
truth edges with 10 pixels. Compared with the baseline, although our results slightly drop in "mean"
and 30°, they are much better in "median" and 11.25°. It shows that by preserving edge information, our geometry has more accurate predictions around boundaries. When evaluated over the entire images, our results outperform the baseline in all the metrics, showing that our algorithm not only largely
improves the prediction in planar regions, but also keeps the good predictions within non-planar
regions.
Depth criteria. When evaluating depths, we similarly first adopt the traditional pixel-wise
depth metrics that are defined in [7, 28]. We refer readers to the original papers for detailed definition
due to limited space. We then also propose plane-wise metrics. Specifically, we generate the normals
from the predicted depths using the NYU toolbox [24], and evaluate the degree variation (var.) of the
generated normals within each plane.
Evaluation over the planar regions (depths):

                 Pixel-wise (lower is better)                    Higher is better               Plane-wise (LTB)
Method           Rel     Rel(sqr)  log10   RMSE_lin  RMSE_log    1.25     1.25²    1.25³       var.
Eigen-VGG [6]    0.1441  0.0892    0.0635  0.5083    0.1968      78.7055  96.3516  99.3291     16.4460
Post-Proc.       0.1470  0.0937    0.0644  0.5200    0.2003      78.2290  96.1145  99.2258     11.1489
Eigen-VGG (JT)   0.1427  0.0881    0.0612  0.4900    0.1930      80.1163  96.4421  99.3029     17.5251
DCRF             0.1438  0.0893    0.0634  0.5100    0.1965      78.7311  96.3739  99.3321     12.0424
DCRF (JT)        0.1424  0.0874    0.0610  0.4873    0.1920      80.1800  96.5481  99.3326     10.5836
DCRF-conf        0.1437  0.0881    0.0631  0.5027    0.1957      78.9070  96.4336  99.3395     12.0420
DCRF-conf (JT)   0.1423  0.0874    0.0610  0.4874    0.1920      80.2453  96.5612  99.3229     10.5746
Oracle           0.1431  0.0879    0.0629  0.5043    0.1950      78.9777  96.4297  99.3605     8.0522

Edge region:
Eigen-VGG [6]    0.1645  0.1369    0.0735  0.7268    0.2275      72.9491  94.2890  98.6539
DCRF-conf (JT)   0.1624  0.1328    0.0707  0.6965    0.2214      74.7198  94.6927  98.7048

Full image:
Eigen-VGG [6]    0.1583  0.1213    0.0671  0.6388    0.2145      77.0536  95.0456  98.8140
DCRF-conf (JT)   0.1555  0.1179    0.0672  0.6430    0.2139      76.8466  95.0946  98.8668

Table 2: Depth accuracy comparison on the NYU v2 dataset (LTB: lower the better).
Evaluation on depth prediction. Similarly, we first report the results on planar regions in the
upper part of Tab. 2, and then present the evaluation on edge areas and over the entire image. We can
observe similar trends of different methods as in normal evaluation, demonstrating the effectiveness
of the proposed approach in both tasks.
Qualitative results. We also visually show an example to illustrate the improvements brought by
our method. In Fig. 5, we visualize the predictions in 3D space, in which the reconstructed structure can be better observed. As can be seen, the results from the network output [6] have many distortions in planar surfaces, and the transition is blurred across plane boundaries, yielding unsatisfactory quality. Our results largely alleviate such problems by incorporating plane and edge regularization, yielding visually much more satisfying results. Due to space limitations, we include more examples in
the supplementary materials.
8 Conclusion
In this paper, we introduce SURGE, which is a system that induces surface regularization to depth
and normal estimation from a single image. Specifically, we formulate the problem as DCRF which
embeds surface affinity and depth-normal compatibility into the regularization. Last but not least,
our DCRF is enabled to be jointly trained with CNN. From our experiments, we achieve promising
results and show such regularization largely improves the quality of estimated depth and surface
normal over planar regions, which is important for 3D editing applications.
Acknowledgment. This work is supported by the NSF Expedition for Visual Cortex on Silicon NSF
award CCF-1317376 and the Army Research Office ARO 62250-CS.
[Figure 5 panels: Image; Normal [6], Ours normal, Normal GT; Depth [6], Ours depth, Depth GT]
Figure 5: Visual comparison between network output from Eigen et.al [6] and our results in 3D view.
We project the RGB and normal color to the 3D points (Best view in color).
References
[1] A. Adams, J. Baek, and M. A. Davis. Fast high-dimensional filtering using the permutohedral lattice. In
Computer Graphics Forum, volume 29, pages 753-762. Wiley Online Library, 2010.
[2] A. Bansal, B. Russell, and A. Gupta. Marr revisited: 2d-3d alignment via surface normal prediction. In
CVPR, 2016.
[3] J. T. Barron, A. Adams, Y. Shih, and C. Hernández. Fast bilateral-space stereo for synthetic defocus.
CVPR, 2015.
[4] J. T. Barron and B. Poole. The fast bilateral solver. CoRR, 2015.
[5] L. Chen, G. Papandreou, I. Kokkinos, K. Murphy, and A. L. Yuille. Semantic image segmentation with
deep convolutional nets and fully connected crfs. ICLR, 2015.
[6] D. Eigen and R. Fergus. Predicting depth, surface normals and semantic labels with a common multi-scale
convolutional architecture. In ICCV, 2015.
[7] D. Eigen, C. Puhrsch, and R. Fergus. Depth map prediction from a single image using a multi-scale deep
network. In NIPS. 2014.
[8] R. Guo and D. Hoiem. Support surface prediction in indoor scenes. In ICCV, 2013.
[9] S. Gupta, R. Girshick, P. Arbelaez, and J. Malik. Learning rich features from RGB-D images for object
detection and segmentation. In ECCV. 2014.
[10] D. Hoiem, A. A. Efros, and M. Hebert. Recovering surface layout from an image. In ICCV, 2007.
[11] K. Honauer, L. Maier-Hein, and D. Kondermann. The hci stereo metrics: Geometry-aware performance
analysis of stereo algorithms. In ICCV, 2015.
[12] S. Ikehata, H. Yang, and Y. Furukawa. Structured indoor modeling. In ICCV, 2015.
[13] Y. Jia, E. Shelhamer, J. Donahue, S. Karayev, J. Long, R. Girshick, S. Guadarrama, and T. Darrell. Caffe:
Convolutional architecture for fast feature embedding. arXiv preprint arXiv:1408.5093, 2014.
[14] I. Kokkinos. Pushing the boundaries of boundary detection using deep learning. ICLR, 2016.
[15] P. Krähenbühl and V. Koltun. Efficient inference in fully connected crfs with gaussian edge potentials.
NIPS, 2012.
[16] P. Krähenbühl and V. Koltun. Efficient nonlocal regularization for optical flow. In ECCV, 2012.
[17] P. Krähenbühl and V. Koltun. Parameter learning and convergent inference for dense random fields. In
ICML, 2013.
[18] L. Ladicky, B. Zeisl, and M. Pollefeys. Discriminatively trained dense surface normal estimation. In D. J.
Fleet, T. Pajdla, B. Schiele, and T. Tuytelaars, editors, ECCV, 2014.
[19] B. Li, C. Shen, Y. Dai, A. van den Hengel, and M. He. Depth and surface normal estimation from
monocular images using regression on deep features and hierarchical crfs. In CVPR, June 2015.
[20] F. Liu, C. Shen, and G. Lin. Deep convolutional neural fields for depth estimation from a single image. In
CVPR, June 2015.
[21] J. Long, E. Shelhamer, and T. Darrell. Fully convolutional networks for semantic segmentation. In CVPR,
pages 3431-3440, 2015.
[22] A. G. Schwing, S. Fidler, M. Pollefeys, and R. Urtasun. Box in the box: Joint 3d layout and object
reasoning from single images. In ICCV, pages 353-360. IEEE Computer Society, 2013.
[23] J. Shi and J. Malik. Normalized cuts and image segmentation. PAMI, 22(8):888-905, 2000.
[24] N. Silberman, D. Hoiem, P. Kohli, and R. Fergus. Indoor segmentation and support inference from rgbd
images. In ECCV (5), pages 746-760, 2012.
[25] S. Song, S. Lichtenberg, and J. Xiao. SUN RGB-D: A RGB-D scene understanding benchmark suite. In
CVPR, 2015.
[26] F. Srajer, A. G. Schwing, M. Pollefeys, and T. Pajdla. Match box: Indoor image matching via box-like
scene estimation. In 2nd International Conference on 3D Vision, 3DV 2014, Tokyo, Japan, December 8-11,
2014, Volume 1, 2014.
[27] N. Srivastava, G. Hinton, A. Krizhevsky, I. Sutskever, and R. Salakhutdinov. Dropout: A simple way to
prevent neural networks from overfitting. The Journal of Machine Learning Research, 15(1), 2014.
[28] P. Wang, X. Shen, Z. Lin, S. Cohen, B. L. Price, and A. L. Yuille. Towards unified depth and semantic
prediction from a single image. In CVPR, 2015.
[29] X. Wang, D. Fouhey, and A. Gupta. Designing deep networks for surface normal estimation. In CVPR,
2015.
[30] J. Xiao and Y. Furukawa. Reconstructing the world's museums. In ECCV, 2012.
[31] S. Xie and Z. Tu. Holistically-nested edge detection. ICCV, 2015.
[32] S. Zheng, S. Jayasumana, B. Romera-Paredes, V. Vineet, Z. Su, D. Du, C. Huang, and P. Torr. Conditional
random fields as recurrent neural networks. In International Conference on Computer Vision (ICCV), 2015.
A Locally Adaptive Normal Distribution
Georgios Arvanitidis, Lars Kai Hansen and Søren Hauberg
Technical University of Denmark, Lyngby, Denmark
DTU Compute, Section for Cognitive Systems
{gear,lkai,sohau}@dtu.dk
Abstract
The multivariate normal density is a monotonic function of the distance to the mean,
and its ellipsoidal shape is due to the underlying Euclidean metric. We suggest to
replace this metric with a locally adaptive, smoothly changing (Riemannian) metric
that favors regions of high local density. The resulting locally adaptive normal
distribution (LAND) is a generalization of the normal distribution to the "manifold" setting, where data is assumed to lie near a potentially low-dimensional manifold embedded in $\mathbb{R}^D$. The LAND is parametric, depending only on a mean and a
covariance, and is the maximum entropy distribution under the given metric. The
underlying metric is, however, non-parametric. We develop a maximum likelihood
algorithm to infer the distribution parameters that relies on a combination of
gradient descent and Monte Carlo integration. We further extend the LAND to
mixture models, and provide the corresponding EM algorithm. We demonstrate
the efficiency of the LAND to fit non-trivial probability distributions over both
synthetic data, and EEG measurements of human sleep.
1 Introduction
The multivariate normal distribution is a fundamental building block in many machine learning
algorithms, and its well-known density can compactly be written as
$$p(\mathbf{x} \mid \boldsymbol{\mu}, \boldsymbol{\Sigma}) \propto \exp\left( -\frac{1}{2}\, \mathrm{dist}^2_{\boldsymbol{\Sigma}}(\boldsymbol{\mu}, \mathbf{x}) \right), \tag{1}$$
where $\mathrm{dist}^2_{\boldsymbol{\Sigma}}(\boldsymbol{\mu}, \mathbf{x})$ denotes the Mahalanobis distance for covariance matrix $\boldsymbol{\Sigma}$. This distance measure
corresponds to the length of the straight line connecting ? and x, and consequently the normal
distribution is often used to model linear phenomena. When data lies near a nonlinear manifold
embedded in $\mathbb{R}^D$, the normal distribution becomes inadequate due to its linear metric. We investigate
if a useful distribution can be constructed by replacing the linear distance function with a nonlinear
counterpart. This is similar in spirit to Isomap [21], which famously replaces the linear distance with a
geodesic distance measured over a neighborhood graph spanned by the data, thereby allowing for
a nonlinear model. This is, however, a discrete distance measure that is only well-defined over the
training data. For a generative model, we need a continuously defined metric over the entire $\mathbb{R}^D$.
Following Hauberg et al. [9] we learn a smoothly changing metric that favors regions of high density
i.e., geodesics tend to move near the data. Under this metric, the data space is interpreted as a
D-dimensional Riemannian manifold. This "manifold learning" does not change dimensionality, but
merely provides a local description of the data. The Riemannian view-point, however, gives a strong
mathematical foundation upon which the proposed distribution can be developed. Our work, thus,
bridges work on statistics on Riemannian manifolds [15, 23] with manifold learning [21].
We develop a locally adaptive normal distribution (LAND) as follows: First, we construct a metric
that captures the nonlinear structure of the data and enables us to compute geodesics; from this, an
30th Conference on Neural Information Processing Systems (NIPS 2016), Barcelona, Spain.
Figure 1: Illustration of the LAND using MNIST images of the digit 1 projected onto the first 2
principal components. Left: comparison of the geodesic and the linear distance. Center: the proposed
locally adaptive normal distribution. Right: the Euclidean normal distribution.
unnormalized density is trivially defined. Second, we propose a scalable Monte Carlo integration
scheme for normalizing the density with respect to the measure induced by the metric. Third, we
develop a gradient-based algorithm for maximum likelihood estimation on the learned manifold. We
further consider a mixture of LANDs and provide the corresponding EM algorithm. The usefulness
of the model is verified on both synthetic data and EEG measurements of human sleep stages.
Notation: all points $\mathbf{x} \in \mathbb{R}^D$ are considered as column vectors, and they are denoted with bold lowercase characters. $\mathcal{S}^D_{++}$ represents the set of symmetric $D \times D$ positive definite matrices. The learned Riemannian manifold is denoted $\mathcal{M}$, and its tangent space at $\mathbf{x} \in \mathcal{M}$ is denoted $T_{\mathbf{x}}\mathcal{M}$.
2 A Brief Summary of Riemannian Geometry
We start our exposition with a brief review of Riemannian manifolds [6]. These smooth manifolds are
naturally equipped with a distance measure, and are commonly used to model physical phenomena
such as dynamical or periodic systems, and many problems that have a smooth behavior.
Definition 1. A smooth manifold $\mathcal{M}$ together with a Riemannian metric $\mathbf{M} : \mathcal{M} \to \mathcal{S}^D_{++}$ is called a Riemannian manifold. The Riemannian metric $\mathbf{M}$ encodes a smoothly changing inner product $\langle \mathbf{u}, \mathbf{M}(\mathbf{x}) \mathbf{v} \rangle$ on the tangent space $\mathbf{u}, \mathbf{v} \in T_{\mathbf{x}}\mathcal{M}$ of each point $\mathbf{x} \in \mathcal{M}$.
Remark 1. The Riemannian metric M(x) acts on tangent vectors, and may, thus, be interpreted as
a standard Mahalanobis metric restricted to an infinitesimal region around x.
The local inner product based on M is a suitable model for capturing local behavior of data, i.e.
manifold learning. From the inner product, we can define geodesics as length-minimizing curves
connecting two points x, y ? M, i.e.
Z 1p
? = argmin
?
h? 0 (t), M(?(t))? 0 (t)idt, s.t. ?(0) = x, ?(1) = y.
(2)
?
0
Here M(?(t)) is the metric tensor at ?(t), and the tangent vector ? 0 denotes the derivative (velocity) of ?. The distance between x and y is defined as the length of the
geodesic. A standard result from differential geometry is that the geodesic can be found
as the solution to a system of 2nd order ordinary differential equations (ODEs) [6, 9]:
|
1
?vec[M(?(t))]
? (t) = ? M?1 (?(t))
(? 0 (t) ? ? 0 (t))
2
??(t)
(3)
v = Logx (y)
?(t)
00
x
subject to ?(0) = x, ?(1) = y. Here vec[?] stacks the columns
of a matrix into a vector and ? is the Kronecker product.
y = Expx (v)
This differential equation allows us to define basic operations on
the manifold. The exponential map at a point x takes a tangent
vector v ? Tx M to y = Expx (v) ? M such that the curve Figure 2: An illustration of the ex?(t) = Expx (t ? v) is a geodesic originating at x with initial ponential and logarithmic maps.
2
velocity v and length kvk. The inverse mapping, which takes y to Tx M is known as the logarithm
map and is denoted Logx (y). By definition kLogx (y)k corresponds to the geodesic distance from
x to y. These operations are illustrated in Fig. 2. The exponential and the logarithmic map can
be computed by solving Eq. 3 numerically, as an initial value problem (IVP) or a boundary value
problem (BVP) respectively. In practice the IVPs are substantially faster to compute than the BVPs.
The Mahalanobis distance is naturally extended to Riemannian manifolds as dist2? (x, y) =
hLogx (y), ??1 Logx (y)i. From this, Pennec [15] considered the Riemannian normal distribution
1
1
?1
pM (x | ?, ?) = exp ? hLog? (x), ? Log? (x)i , x ? M
(4)
C
2
and showed that it is the manifold-valued distribution with maximum entropy subject to a known
mean and covariance. This distribution is an instance of Eq. 1 and is the distribution we consider in
this paper. Next, we consider standard ?intrinsic least squares? estimates of ? and ?.
2.1 Intrinsic Least Squares Estimators
Let the data be generated from an unknown probability distribution qM (x) on a manifold. Then it is
common [15] to define the intrinsic mean of the distribution as the point that minimizes the variance
$$\bar{\boldsymbol{\mu}} = \operatorname*{argmin}_{\boldsymbol{\mu} \in \mathcal{M}} \int_{\mathcal{M}} \mathrm{dist}^2(\boldsymbol{\mu}, \mathbf{x})\, q_{\mathcal{M}}(\mathbf{x})\, d\mathcal{M}(\mathbf{x}), \tag{5}$$
where dM(x) is the measure (or infinitesimal volume element) induced by the metric. Based on the
mean, a covariance matrix can be defined
$$\bar{\boldsymbol{\Sigma}} = \int_{\mathcal{D}(\bar{\boldsymbol{\mu}})} \mathrm{Log}_{\bar{\boldsymbol{\mu}}}(\mathbf{x})\, \mathrm{Log}_{\bar{\boldsymbol{\mu}}}(\mathbf{x})^{\top}\, q_{\mathcal{M}}(\mathbf{x})\, d\mathcal{M}(\mathbf{x}), \tag{6}$$
where $\mathcal{D}(\bar{\boldsymbol{\mu}})$ is the domain over which $T_{\bar{\boldsymbol{\mu}}}\mathcal{M}$ is well-defined. For the manifolds we consider, the domain $\mathcal{D}(\bar{\boldsymbol{\mu}})$ is $\mathbb{R}^D$. Practical estimators of $\bar{\boldsymbol{\mu}}$ rely on gradient-based optimization to find a local minimizer of Eq. 5, which is well-defined [12]. For finite data $\{\mathbf{x}_n\}_{n=1}^N$, the descent direction is proportional to $\bar{\mathbf{v}} = \sum_{n=1}^N \mathrm{Log}_{\boldsymbol{\mu}}(\mathbf{x}_n) \in T_{\boldsymbol{\mu}}\mathcal{M}$, and the updated mean is a point on the geodesic curve $\gamma(t) = \mathrm{Exp}_{\boldsymbol{\mu}}(t \cdot \bar{\mathbf{v}})$. After estimating the mean, the empirical covariance matrix is estimated as $\bar{\boldsymbol{\Sigma}} = \frac{1}{N-1} \sum_{n=1}^N \mathrm{Log}_{\bar{\boldsymbol{\mu}}}(\mathbf{x}_n)\, \mathrm{Log}_{\bar{\boldsymbol{\mu}}}(\mathbf{x}_n)^{\top}$. It is worth noting that even though these estimators are natural, they are not maximum likelihood estimates for the Riemannian normal distribution (4).
natural, they are not maximum likelihood estimates for the Riemannian normal distribution (4).
In practice, the intrinsic mean often falls in regions of low data density [8]. For instance, consider
data distributed uniformly on the equator of a sphere, then the optima of Eq. 5 is either of the poles.
Consequently, the empirical covariance is often overestimated.
3
A Locally Adaptive Normal Distribution
We now have the tools to define a locally adaptive normal distribution (LAND): we replace the
linear Euclidean distance with a locally adaptive Riemannian distance and study the corresponding
Riemannian normal distribution (4). By learning a Riemannian manifold and using its structure to
estimate distributions of the data, we provide a new and useful link between Riemannian statistics
and manifold learning.
3.1 Constructing a Metric
In the context of manifold learning, Hauberg et al. [9] suggest to model the local behavior of the data
manifold via a locally-defined Riemannian metric. Here we propose to use a local covariance matrix
to represent the local structure of the data. We only consider diagonal covariances for computational
efficiency and to prevent overfitting. The locality of the covariance is defined via an isotropic Gaussian kernel of size $\sigma$. Thus, the metric tensor at $\mathbf{x} \in \mathcal{M}$ is defined as the inverse of a local diagonal covariance matrix with entries
$$M_{dd}(\mathbf{x}) = \left( \sum_{n=1}^{N} w_n(\mathbf{x})\, (x_{nd} - x_d)^2 + \rho \right)^{-1}, \quad \text{with } w_n(\mathbf{x}) = \exp\left( -\frac{\|\mathbf{x}_n - \mathbf{x}\|_2^2}{2\sigma^2} \right). \tag{7}$$
Here $x_{nd}$ is the $d$th dimension of the $n$th observation, and $\rho$ is a regularization parameter to avoid singular covariances. This defines a smoothly changing (hence Riemannian) metric that captures the local structure of the data. It is easy to see that if $\mathbf{x}$ is outside of the support of the data, then the metric tensor is large. Thus, geodesics are "pulled" towards the data where the metric is small. Note that the proposed metric is not invariant to linear transformations. While we restrict our attention to this particular choice, other learned metrics are equally applicable, c.f. [22, 9].
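A minimal sketch of the metric in Eqn. (7); the parameter values are illustrative only.

```python
import numpy as np

def metric_tensor_diag(x, X, sigma=0.1, rho=1e-4):
    """Diagonal of the metric tensor M(x) of Eqn. (7).

    X : (N, D) data matrix; sigma, rho are illustrative values only.
    The full tensor is np.diag(...) of the returned vector.
    """
    w = np.exp(-np.sum((X - x) ** 2, axis=1) / (2.0 * sigma ** 2))  # (N,)
    local_var = (w[:, None] * (X - x) ** 2).sum(axis=0) + rho       # (D,)
    return 1.0 / local_var   # inverse of the local diagonal covariance
```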
3.2 Estimating the Normalization Constant
The normalization constant of Eq. 4 is by definition
$$C(\boldsymbol{\mu}, \boldsymbol{\Sigma}) = \int_{\mathcal{M}} \exp\left( -\frac{1}{2} \langle \mathrm{Log}_{\boldsymbol{\mu}}(\mathbf{x}), \boldsymbol{\Sigma}^{-1} \mathrm{Log}_{\boldsymbol{\mu}}(\mathbf{x}) \rangle \right) d\mathcal{M}(\mathbf{x}), \tag{8}$$
where $d\mathcal{M}(\mathbf{x})$ denotes the measure induced by the Riemannian metric. The constant $C(\boldsymbol{\mu}, \boldsymbol{\Sigma})$ depends not only on the covariance matrix, but also on the mean of the distribution, and the curvature of the manifold (captured by the logarithm map). For a general learned manifold, $C(\boldsymbol{\mu}, \boldsymbol{\Sigma})$ is inaccessible in closed-form and we resort to numerical techniques. We start by rewriting Eq. 8 as
$$C(\boldsymbol{\mu}, \boldsymbol{\Sigma}) = \int_{T_{\boldsymbol{\mu}}\mathcal{M}} \sqrt{\det \mathbf{M}(\mathrm{Exp}_{\boldsymbol{\mu}}(\mathbf{v}))}\, \exp\left( -\frac{1}{2} \langle \mathbf{v}, \boldsymbol{\Sigma}^{-1} \mathbf{v} \rangle \right) d\mathbf{v}. \tag{9}$$
In effect, we integrate the distribution over the tangent space $T_{\boldsymbol{\mu}}\mathcal{M}$ instead of directly over the manifold. This transformation relies on the fact that the volume of an infinitely small area on the manifold can be computed in the tangent space if we take the deformation of the metric into account [15]. This deformation is captured by the measure which, in the tangent space, is $d\mathcal{M}(\mathbf{x}) = \sqrt{\det \mathbf{M}(\mathrm{Exp}_{\boldsymbol{\mu}}(\mathbf{v}))}\, d\mathbf{v}$. For notational simplicity we define the function $m(\boldsymbol{\mu}, \mathbf{v}) = \sqrt{\det \mathbf{M}(\mathrm{Exp}_{\boldsymbol{\mu}}(\mathbf{v}))}$, which intuitively captures the cost for a point to be outside the data support ($m$ is large in low density areas and small where the density is high).
We estimate the normalization constant (9) using Monte Carlo integration. We first multiply and divide the integral with the normalization constant of the Euclidean normal distribution $Z = \sqrt{(2\pi)^D |\boldsymbol{\Sigma}|}$. Then, the integral becomes an expectation estimation problem $C(\boldsymbol{\mu}, \boldsymbol{\Sigma}) = Z \cdot \mathbb{E}_{\mathcal{N}(\mathbf{0}, \boldsymbol{\Sigma})}[m(\boldsymbol{\mu}, \mathbf{v})]$, which can be estimated numerically as
$$C(\boldsymbol{\mu}, \boldsymbol{\Sigma}) \simeq \frac{Z}{S} \sum_{s=1}^{S} m(\boldsymbol{\mu}, \mathbf{v}_s), \quad \text{where } \mathbf{v}_s \sim \mathcal{N}(\mathbf{0}, \boldsymbol{\Sigma}), \tag{10}$$
and $S$ is the number of samples on $T_{\boldsymbol{\mu}}\mathcal{M}$. The computationally expensive element is to evaluate $m$, which in turn requires evaluating $\mathrm{Exp}_{\boldsymbol{\mu}}(\mathbf{v})$. This amounts to solving an IVP numerically, which is fairly fast. Had we performed the integration directly on the manifold (8) we would have had to evaluate the logarithm map, which is a much more expensive BVP. The tangent space integration, thus, scales better.
[Figure 3: Comparison of LAND and intrinsic least squares means.]
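The estimator in Eq. 10 is straightforward given the exponential map. Below is a sketch, where we read $m(\boldsymbol{\mu}, \mathbf{v})$ as the square root of the metric determinant at $\mathrm{Exp}_{\boldsymbol{\mu}}(\mathbf{v})$; `exp_map` and `metric` are hypothetical callables (e.g. the IVP solver above with the metric bound in).

```python
import numpy as np

def normalization_constant(mu, Sigma, exp_map, metric, S=3000, rng=None):
    """Monte Carlo estimate of C(mu, Sigma), Eq. 10.

    exp_map(mu, v) : numerical exponential map; metric(x) : (D, D) tensor.
    """
    rng = np.random.default_rng() if rng is None else rng
    D = mu.size
    Z = np.sqrt((2.0 * np.pi) ** D * np.linalg.det(Sigma))
    V = rng.multivariate_normal(np.zeros(D), Sigma, size=S)   # v_s ~ N(0, Sigma)
    m = np.array([np.sqrt(np.linalg.det(metric(exp_map(mu, v)))) for v in V])
    return Z * m.mean()
```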
3.3 Inferring Parameters
Assuming an independent and identically distributed dataset $\{\mathbf{x}_n\}_{n=1}^N$, we can write their joint distribution as $p_{\mathcal{M}}(\mathbf{x}_1, \ldots, \mathbf{x}_N) = \prod_{n=1}^N p_{\mathcal{M}}(\mathbf{x}_n \mid \boldsymbol{\mu}, \boldsymbol{\Sigma})$. We find parameters $\boldsymbol{\mu}$ and $\boldsymbol{\Sigma}$ by maximum likelihood, which we implement by minimizing the mean negative log-likelihood
$$\{\hat{\boldsymbol{\mu}}, \hat{\boldsymbol{\Sigma}}\} = \operatorname*{argmin}_{\boldsymbol{\mu} \in \mathcal{M},\, \boldsymbol{\Sigma} \in \mathcal{S}^D_{++}} \varphi(\boldsymbol{\mu}, \boldsymbol{\Sigma}) = \operatorname*{argmin}_{\boldsymbol{\mu} \in \mathcal{M},\, \boldsymbol{\Sigma} \in \mathcal{S}^D_{++}} \frac{1}{2N} \sum_{n=1}^{N} \langle \mathrm{Log}_{\boldsymbol{\mu}}(\mathbf{x}_n), \boldsymbol{\Sigma}^{-1} \mathrm{Log}_{\boldsymbol{\mu}}(\mathbf{x}_n) \rangle + \log\left( C(\boldsymbol{\mu}, \boldsymbol{\Sigma}) \right). \tag{11}$$
The first term of the objective function $\varphi : \mathcal{M} \times \mathcal{S}^D_{++} \to \mathbb{R}$ is a data-fitting term, while the second can be seen as a force that both pulls the mean closer to the high density areas and shrinks the covariance.
Specifically, when the mean is in low density areas, as well as when the covariance gives significant
probability to those areas, the value of $m(\boldsymbol{\mu}, \mathbf{v})$ will by construction be large. Consequently, $C(\boldsymbol{\mu}, \boldsymbol{\Sigma})$ will increase and these solutions will be penalized. In practice, we find that the maximum likelihood LAND mean generally avoids low density regions, which is in contrast to the standard intrinsic least squares mean (5), see Fig. 3.
In practice we optimize $\varphi$ using block coordinate descent: we optimize the mean keeping the covariance fixed and vice versa. Unfortunately, both of the sub-problems are non-convex, and unlike the linear normal distribution, they lack a closed-form solution. Since the logarithm map is a differentiable function, we can use gradient-based techniques to infer $\boldsymbol{\mu}$ and $\boldsymbol{\Sigma}$. Below we give the descent directions for $\boldsymbol{\mu}$ and $\boldsymbol{\Sigma}$; the corresponding optimization scheme is given in Algorithm 1. Initialization is discussed in the supplements.

Algorithm 1: LAND maximum likelihood
Input: the data $\{\mathbf{x}_n\}_{n=1}^N$, step sizes $\alpha_{\boldsymbol{\mu}}, \alpha_{\mathbf{A}}$
Output: the estimated $\hat{\boldsymbol{\mu}}$, $\hat{\boldsymbol{\Sigma}}$, $C(\hat{\boldsymbol{\mu}}, \hat{\boldsymbol{\Sigma}})$
1:  initialize $\boldsymbol{\mu}_0$, $\boldsymbol{\Sigma}_0$ and $t \leftarrow 0$
2:  repeat
3:    estimate $C(\boldsymbol{\mu}_t, \boldsymbol{\Sigma}_t)$ using Eq. 10
4:    compute $d_{\boldsymbol{\mu}} \varphi(\boldsymbol{\mu}_t, \boldsymbol{\Sigma}_t)$ using Eq. 12
5:    $\boldsymbol{\mu}_{t+1} \leftarrow \mathrm{Exp}_{\boldsymbol{\mu}_t}(\alpha_{\boldsymbol{\mu}}\, d_{\boldsymbol{\mu}} \varphi(\boldsymbol{\mu}_t, \boldsymbol{\Sigma}_t))$
6:    estimate $C(\boldsymbol{\mu}_{t+1}, \boldsymbol{\Sigma}_t)$ using Eq. 10
7:    compute $\nabla_{\mathbf{A}} \varphi(\boldsymbol{\mu}_{t+1}, \boldsymbol{\Sigma}_t)$ using Eq. 13
8:    $\mathbf{A}_{t+1} \leftarrow \mathbf{A}_t - \alpha_{\mathbf{A}}\, \nabla_{\mathbf{A}} \varphi(\boldsymbol{\mu}_{t+1}, \boldsymbol{\Sigma}_t)$
9:    $\boldsymbol{\Sigma}_{t+1} \leftarrow [(\mathbf{A}_{t+1})^{\top} \mathbf{A}_{t+1}]^{-1}$
10:   $t \leftarrow t + 1$
11: until $\left| \varphi(\boldsymbol{\mu}_{t+1}, \boldsymbol{\Sigma}_{t+1}) - \varphi(\boldsymbol{\mu}_t, \boldsymbol{\Sigma}_t) \right| \leq \epsilon$
Optimizing $\boldsymbol{\mu}$: the objective function is differentiable with respect to $\boldsymbol{\mu}$ [6], and using that $\nabla_{\boldsymbol{\mu}} \langle \mathrm{Log}_{\boldsymbol{\mu}}(\mathbf{x}), \boldsymbol{\Sigma}^{-1} \mathrm{Log}_{\boldsymbol{\mu}}(\mathbf{x}) \rangle = -2 \boldsymbol{\Sigma}^{-1} \mathrm{Log}_{\boldsymbol{\mu}}(\mathbf{x})$, we get the gradient
$$\nabla_{\boldsymbol{\mu}} \varphi(\boldsymbol{\mu}, \boldsymbol{\Sigma}) = -\boldsymbol{\Sigma}^{-1} \left[ \frac{1}{N} \sum_{n=1}^{N} \mathrm{Log}_{\boldsymbol{\mu}}(\mathbf{x}_n) - \frac{Z}{C(\boldsymbol{\mu}, \boldsymbol{\Sigma}) \cdot S} \sum_{s=1}^{S} m(\boldsymbol{\mu}, \mathbf{v}_s)\, \mathbf{v}_s \right]. \tag{12}$$
It is easy to see that this gradient is highly dependent on the condition number of $\boldsymbol{\Sigma}$. We find that this, at times, makes the gradient unstable, and choose to use the steepest descent direction instead of the gradient direction. This is equal to $d_{\boldsymbol{\mu}} \varphi(\boldsymbol{\mu}, \boldsymbol{\Sigma}) = -\boldsymbol{\Sigma}\, \nabla_{\boldsymbol{\mu}} \varphi(\boldsymbol{\mu}, \boldsymbol{\Sigma})$ (see supplements).
Optimizing $\boldsymbol{\Sigma}$: since the covariance matrix by definition is constrained to be in the space $\mathcal{S}^D_{++}$, a common trick is to decompose the matrix as $\boldsymbol{\Sigma}^{-1} = \mathbf{A}^{\top} \mathbf{A}$, and optimize the objective with respect to $\mathbf{A}$. The gradient of this factor is (see supplements for derivation)
$$\nabla_{\mathbf{A}} \varphi(\boldsymbol{\mu}, \boldsymbol{\Sigma}) = \mathbf{A} \left[ \frac{1}{N} \sum_{n=1}^{N} \mathrm{Log}_{\boldsymbol{\mu}}(\mathbf{x}_n)\, \mathrm{Log}_{\boldsymbol{\mu}}(\mathbf{x}_n)^{\top} - \frac{Z}{C(\boldsymbol{\mu}, \boldsymbol{\Sigma}) \cdot S} \sum_{s=1}^{S} m(\boldsymbol{\mu}, \mathbf{v}_s)\, \mathbf{v}_s \mathbf{v}_s^{\top} \right]. \tag{13}$$
Here the first term fits the given data by increasing the size of the covariance matrix, while the second
term regularizes the covariance towards a small matrix.
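For concreteness, one block-coordinate step of Algorithm 1 might be sketched as follows. Note that plugging the Monte Carlo estimate of C from Eq. 10 into Eqs. 12-13 turns the weight Z/(C·S) into self-normalized importance weights m_s/Σ_s m_s; `log_map`, `exp_map` and `mc_samples` are hypothetical numerical operators.

```python
import numpy as np

def land_fit_step(mu, A, X, log_map, exp_map, mc_samples,
                  lr_mu=0.1, lr_A=0.01):
    """One block-coordinate update of Algorithm 1 (Eqs. 12-13), as we read it.

    A : factor with Sigma^{-1} = A^T A; log_map(mu, x) returns Log_mu(x);
    mc_samples(mu, Sigma) returns (V, m): samples v_s and weights m(mu, v_s).
    """
    Sigma = np.linalg.inv(A.T @ A)

    # Gradient w.r.t. mu (Eq. 12), then steepest descent d_mu = -Sigma grad.
    V, m = mc_samples(mu, Sigma)                       # (S, D), (S,)
    L = np.stack([log_map(mu, x) for x in X])          # (N, D)
    mc_mean = (m[:, None] * V).sum(0) / m.sum()        # self-normalized MC term
    grad_mu = -np.linalg.solve(Sigma, L.mean(0) - mc_mean)
    d_mu = -Sigma @ grad_mu
    mu = exp_map(mu, lr_mu * d_mu)

    # Gradient w.r.t. A (Eq. 13) at the updated mean.
    V, m = mc_samples(mu, Sigma)
    L = np.stack([log_map(mu, x) for x in X])
    emp = L.T @ L / len(X)                             # empirical second moment
    mc = (m[:, None, None] * V[:, :, None] * V[:, None, :]).sum(0) / m.sum()
    A = A - lr_A * (A @ (emp - mc))
    return mu, A
```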
3.4 Mixture of LANDs
At this point we can find maximum likelihood estimates of the LAND model. We can easily extend
this to mixtures of LANDs: Following the derivation of the standard Gaussian mixture model [3], our
objective function for inferring the parameters of the LAND mixture model is formulated as follows
$$\varphi(\boldsymbol{\theta}) = \sum_{k=1}^{K} \sum_{n=1}^{N} r_{nk} \left[ \frac{1}{2} \langle \mathrm{Log}_{\boldsymbol{\mu}_k}(\mathbf{x}_n), \boldsymbol{\Sigma}_k^{-1} \mathrm{Log}_{\boldsymbol{\mu}_k}(\mathbf{x}_n) \rangle + \log(C(\boldsymbol{\mu}_k, \boldsymbol{\Sigma}_k)) - \log(\pi_k) \right], \tag{14}$$
where $\boldsymbol{\theta} = \{\boldsymbol{\mu}_k, \boldsymbol{\Sigma}_k\}_{k=1}^K$, $r_{nk} = \frac{\pi_k\, p_{\mathcal{M}}(\mathbf{x}_n \mid \boldsymbol{\mu}_k, \boldsymbol{\Sigma}_k)}{\sum_{l=1}^K \pi_l\, p_{\mathcal{M}}(\mathbf{x}_n \mid \boldsymbol{\mu}_l, \boldsymbol{\Sigma}_l)}$ is the probability that $\mathbf{x}_n$ is generated by the $k$-th component, and $\sum_{k=1}^K \pi_k = 1$, $\pi_k \geq 0$. The corresponding EM algorithm is in the supplements.
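A sketch of the E-step responsibilities $r_{nk}$, computed in log-space for numerical stability; the interfaces are illustrative assumptions.

```python
import numpy as np

def land_responsibilities(X, pis, mus, Sigmas, Cs, log_map):
    """E-step responsibilities r_nk of Eqn. (14).

    Cs : precomputed normalization constants C(mu_k, Sigma_k) (Eq. 10).
    """
    K, N = len(pis), len(X)
    logp = np.empty((N, K))
    for k in range(K):
        L = np.stack([log_map(mus[k], x) for x in X])          # (N, D)
        maha = np.einsum('nd,de,ne->n', L, np.linalg.inv(Sigmas[k]), L)
        logp[:, k] = np.log(pis[k]) - 0.5 * maha - np.log(Cs[k])
    logp -= logp.max(axis=1, keepdims=True)                    # stabilize exp
    r = np.exp(logp)
    return r / r.sum(axis=1, keepdims=True)
```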
4 Experiments
In this section we present both synthetic and real experiments to demonstrate the advantages of the
LAND. We compare our model with both the Gaussian mixture model (GMM), and a mixture of
LANDs using least squares (LS) estimators (5, 6). Since the latter are not maximum likelihood
estimates we use a Riemannian K-means algorithm to find cluster centers. In all experiments we
use S = 3000 samples in the Monte Carlo integration. This choice is investigated empirically in the
supplements. Furthermore, we choose ? as small as possible, while ensuring that the manifold is
smooth enough that geodesics can be computed numerically.
5
4.1 Synthetic Data Experiments
As a first experiment, we generate a nonlinear data-manifold by sampling from a mixture of 20 Gaussians positioned along a half-ellipsoidal curve (see left panel of Fig. 5). We generate 10 datasets with 300 points each, and fit for each dataset the three models with $K = 1, \ldots, 4$ components. Then, we generate 10000 samples from each fitted model, and we compute the mean negative log-likelihood of the true generative distribution using these samples. Fig. 4 shows that the LAND learns the underlying true distribution faster than the GMM. Moreover, the LAND performs better than the least squares estimators, which overestimate the covariance. In the supplements we show, using the standard AIC and BIC criteria, that the optimal LAND is achieved for $K = 1$, while for the least squares estimators and the GMM, the optimum is achieved for $K = 3$ and $K = 4$ respectively.
[Figure 4: The mean negative log-likelihood experiment; curves for GMM, LS, LAND, and the true distribution versus the number of mixture components.]
In addition, in Fig. 5 we show the contours for the LAND and the GMM for K = 2. There, we can observe that indeed the LAND adapts locally to the data and reveals its underlying nonlinear structure. This is particularly evident near the 'boundaries' of the data-manifold.
Figure 5: Synthetic data and the fitted models. Left: the given data, where the intensity of the geodesics represents the responsibility of the point to the corresponding cluster. Center: the contours of the LAND mixture model. Right: the contours of the Gaussian mixture model.
We extend this experiment to a clustering task (see left panel of Fig. 6 for data). The center and right
panels of Fig. 6 show the contours of the LAND and Gaussian mixtures, and it is evident that the
LAND is substantially better at capturing non-ellipsoidal clusters. Due to space limitations, we move
further illustrative experiments to the supplementary material and continue with real data.
4.2 Modeling Sleep Stages
We consider electro-encephalography (EEG) measurements of human sleep from 10 subjects, part of the PhysioNet database [11, 7, 5]. For each subject we get EEG measurements during sleep from two electrodes on the front and the back of the head, respectively. Measurements are sampled at fs = 100 Hz, and for each 30 second window a so-called sleep stage label is assigned from the set {1, 2, 3, 4, REM, awake}. Rapid eye movement (REM) sleep is particularly interesting, characterized by having EEG patterns similar to the awake state but with a complex physiological pattern, involving e.g. reduced muscle tone, rolling eye movements and erection [16]. Recent evidence points to the importance of REM sleep for memory consolidation [4]. Periods in which the sleeper is awake typically happen in or near REM intervals. Thus we here consider the characterization of sleep in terms of three categories: REM, awake, and non-REM, the latter a merger of sleep stages 1–4.

We extract features from EEG measurements as follows: for each subject we subdivide the 30 second windows into 10 second windows, and apply a short-time Fourier transform to the EEG signal of the frontal electrode with 50% overlapping windows. From this we compute the log magnitude of the spectrum, log(1 + |f|), of each window. The resulting data matrix is decomposed using Non-Negative Matrix Factorization (10 random starts) into five factors, and we use the coefficients as 5D features. In Fig. 7 we illustrate the nonlinear manifold structure based on a three factor analysis.
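A rough sketch of this feature pipeline is given below. The window length (10 s at fs = 100 Hz gives 1000 samples), the Hann taper, and the use of scikit-learn's NMF for the factorization are our assumptions about one reasonable realization, not the authors' exact preprocessing code.

import numpy as np
from sklearn.decomposition import NMF

def log_spectrogram(x, win=1000, hop=500):         # 10 s windows, 50% overlap
    frames = np.array([x[i:i + win] * np.hanning(win)
                       for i in range(0, len(x) - win + 1, hop)])
    return np.log1p(np.abs(np.fft.rfft(frames, axis=1)))  # log(1 + |f|)

def eeg_features(x, n_factors=5):
    S = log_spectrogram(x)
    best = min((NMF(n_components=n_factors, init='random', random_state=r).fit(S)
                for r in range(10)),               # 10 random restarts
               key=lambda m: m.reconstruction_err_)
    return best.transform(S)                       # 5-D coefficients per frame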
Figure 6: The clustering problem for two synthetic datasets. Left: the given data, where the intensity of the geodesics represents the responsibility of the point to the corresponding cluster. Center: the LAND mixture model. Right: the Gaussian mixture model.
We perform clustering on the data and evaluate the alignment between cluster labels and sleep stages using the F-measure [14]. The LAND depends on the parameter σ to construct the metric tensor, and in this experiment it is less straightforward to select σ because of significant intersubject variability. First, we fixed σ = 1 for all the subjects. From the results in Table 1 we observe that for σ = 1 the LAND(1) generally outperforms the GMM and achieves much better alignment. To further illustrate the effect of σ we fitted a LAND for σ = [0.5, 0.6, . . . , 1.5] and present the best result achieved by the LAND. Selecting σ this way indeed leads to higher degrees of alignment, further underlining that the conspicuous manifold structure and the rather compact sleep stage distributions in Fig. 7 are both captured better with the LAND representation than with a linear GMM.

Figure 7: The 3 leading factors for subject 's151', colored by sleep stage (1–4, R.E.M., awake).
Table 1: The F-measure result for 10 subjects (the closer to 1 the better).

          s001   s011   s042   s062   s081   s141   s151   s161   s162   s191
LAND(1)   0.831  0.701  0.670  0.740  0.804  0.870  0.820  0.780  0.747  0.786
GMM       0.812  0.690  0.675  0.651  0.798  0.870  0.794  0.775  0.747  0.776
LAND      0.831  0.716  0.695  0.740  0.818  0.874  0.830  0.783  0.750  0.787

5 Related Work
We are not the first to consider Riemannian normal distributions, e.g. Pennec [15] gives a theoretical
analysis of the distribution, and Zhang and Fletcher [23] consider the Riemannian counterpart of
probabilistic PCA. Both consider the scenario where the manifold is known a priori. We adapt the
distribution to the 'manifold learning' setting by constructing a Riemannian metric that adapts to the
data. This is our overarching contribution.
Traditionally, manifold learning is seen as an embedding problem where a low-dimensional representation of the data is sought. This is useful for visualization [21, 17, 18, 1], clustering [13],
semi-supervised learning [2] and more. However, in embedding approaches, the relation between a
new point and the embedded points are less well-defined, and consequently these approaches are less
suited for building generative models. In contrast, the Riemannian approach gives the ability to measure continuous geodesics that follow the structure of the data. This makes the learned Riemannian
manifold a suitable space for a generative model.
Simo-Serra et al. [19] consider mixtures of Riemannian normal distributions on manifolds that
are known a priori. Structurally, their EM algorithm is similar to ours, but they do not account
for the normalization constants for different mixture components. Consequently, their approach is
inconsistent with the probabilistic formulation. Straub et al. [20] consider data on spherical manifolds,
and further consider a Dirichlet process prior for determining the number of components. Such a
prior could also be incorporated in our model. The key difference to our work is that we consider learned manifolds, along with the complications that follow from this choice.
6 Discussion
In this paper we have introduced a parametric locally adaptive normal distribution. The idea is to
replace the Euclidean distance in the ordinary normal distribution with a locally adaptive nonlinear
distance measure. In principle, we learn a non-parametric metric space, by constructing a smoothly
changing metric that induces a Riemannian manifold, where we build our model. As such, we propose
a parametric model over a non-parametric space.
The non-parametric space is constructed using a local metric that is the inverse of a local covariance
matrix. Here locality is defined via a Gaussian kernel, such that the manifold learning can be seen
as a form of kernel smoothing. This indicates that our scheme for learning a manifold might not
scale to high-dimensional input spaces. In these cases it may be more practical to learn the manifold
probabilistically [22] or as a mixture of metrics [9]. This is feasible as the LAND estimation procedure
is agnostic to the details of the learned manifold as long as exponential and logarithm maps can be
evaluated.
Once a manifold is learned, the LAND is simply a Riemannian normal distribution. This is a natural
model, but more intriguing, it is a theoretical interesting model since it is the maximum entropy
distribution for a fixed mean and covariance [15]. It is generally difficult to build locally adaptive
distributions with maximum entropy properties, yet the LAND does this in a fairly straight-forward
manner. This is, however, only a partial truth as the distribution depends on the non-parametric space.
The natural question, to which we currently do not have an answer, is whether a suitable maximum entropy manifold exists.
Algorithmically, we have proposed a maximum likelihood estimation scheme for the LAND. This
combines a gradient-based optimization with a scalable Monte Carlo integration method. Once
exponential and logarithm maps are available, this procedure is surprisingly simple to implement. We
have demonstrated the algorithm on both real and synthetic data and results are encouraging. We
almost always improve upon a standard Gaussian mixture model as the LAND is better at capturing
the local properties of the data.
We note that both the manifold learning aspect and the algorithmic aspect of our work can be improved.
It would be of great value to learn the parameter σ used for smoothing the Riemannian metric, and in
general, more adaptive learning schemes are of interest. Computationally, the bottleneck of our work
is evaluating the logarithm maps. This may be improved by specialized solvers, e.g. probabilistic
solvers [10], or manifold-specific heuristics.
The ordinary normal distribution is a key element in many machine learning algorithms. We expect
that many fundamental generative models can be extended to the 'manifold' setting simply by replacing the normal distribution with a LAND. Examples of this idea include Naïve Bayes, Linear
Discriminant Analysis, Principal Component Analysis and more. Finally we note that standard
hypothesis tests also extend to Riemannian normal distributions [15] and hence also to the LAND.
Acknowledgements. LKH was funded in part by the Novo Nordisk Foundation Interdisciplinary Synergy Program 2014, "Biophysically adjusted state-informed cortex stimulation (BASICS)". SH was funded in part by the Danish Council for Independent Research, Natural Sciences.
References
[1] M. Belkin and P. Niyogi. Laplacian Eigenmaps for Dimensionality Reduction and Data Representation. Neural Computation, 15(6):1373–1396, June 2003.
[2] M. Belkin, P. Niyogi, and V. Sindhwani. Manifold Regularization: A Geometric Framework for Learning from Labeled and Unlabeled Examples. JMLR, 7:2399–2434, Dec. 2006.
[3] C. M. Bishop. Pattern Recognition and Machine Learning (Information Science and Statistics). Springer-Verlag New York, Inc., Secaucus, NJ, USA, 2006.
[4] R. Boyce, S. D. Glasgow, S. Williams, and A. Adamantidis. Causal evidence for the role of REM sleep theta rhythm in contextual memory consolidation. Science, 352(6287):812–816, 2016.
[5] A. Delorme and S. Makeig. EEGLAB: an open source toolbox for analysis of single-trial EEG dynamics including independent component analysis. J. Neurosci. Methods, page 21, 2004.
[6] M. do Carmo. Riemannian Geometry. Mathematics (Boston, Mass.). Birkhäuser, 1992.
[7] A. L. Goldberger, L. A. N. Amaral, L. Glass, J. M. Hausdorff, P. C. Ivanov, R. G. Mark, J. E. Mietus, G. B. Moody, C.-K. Peng, and H. E. Stanley. PhysioBank, PhysioToolkit, and PhysioNet: Components of a New Research Resource for Complex Physiologic Signals. Circulation, 101(23):e215–e220, 2000 (June 13).
[8] S. Hauberg. Principal Curves on Riemannian Manifolds. IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI), 2016.
[9] S. Hauberg, O. Freifeld, and M. J. Black. A Geometric Take on Metric Learning. In Advances in Neural Information Processing Systems (NIPS) 25, pages 2033–2041, 2012.
[10] P. Hennig and S. Hauberg. Probabilistic Solutions to Differential Equations and their Application to Riemannian Statistics. In Proceedings of the 17th International Conference on Artificial Intelligence and Statistics (AISTATS), volume 33, 2014.
[11] S. A. Imtiaz and E. Rodriguez-Villegas. An open-source toolbox for standardized use of PhysioNet Sleep EDF Expanded Database. In 2015 37th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC), pages 6014–6017, Aug 2015.
[12] H. Karcher. Riemannian center of mass and mollifier smoothing. Communications on Pure and Applied Mathematics, 30(5):509–541, 1977.
[13] U. Luxburg. A Tutorial on Spectral Clustering. Statistics and Computing, 17(4):395–416, Dec. 2007.
[14] R. Marxer, H. Purwins, and A. Hazan. An f-measure for evaluation of unsupervised clustering with non-determined number of clusters. Report of the EmCAP project (European Commission FP6-IST), pages 1–3, 2008.
[15] X. Pennec. Intrinsic Statistics on Riemannian Manifolds: Basic Tools for Geometric Measurements. Journal of Mathematical Imaging and Vision, 25(1):127–154, July 2006.
[16] D. Purves, G. Augustine, D. Fitzpatrick, W. Hall, A. LaMantia, J. McNamara, and L. White. Neuroscience, 2008. De Boeck, Sinauer, Sunderland, Mass.
[17] S. T. Roweis and L. K. Saul. Nonlinear dimensionality reduction by locally linear embedding. Science, 290:2323–2326, 2000.
[18] B. Schölkopf, A. Smola, and K.-R. Müller. Kernel principal component analysis. In Advances in Kernel Methods - Support Vector Learning, pages 327–352, 1999.
[19] E. Simo-Serra, C. Torras, and F. Moreno-Noguer. Geodesic Finite Mixture Models. In Proceedings of the British Machine Vision Conference. BMVA Press, 2014.
[20] J. Straub, J. Chang, O. Freifeld, and J. W. Fisher III. A Dirichlet Process Mixture Model for Spherical Data. In International Conference on Artificial Intelligence and Statistics (AISTATS), 2015.
[21] J. B. Tenenbaum, V. de Silva, and J. C. Langford. A Global Geometric Framework for Nonlinear Dimensionality Reduction. Science, 290(5500):2319, 2000.
[22] A. Tosi, S. Hauberg, A. Vellido, and N. D. Lawrence. Metrics for Probabilistic Geometries. In The Conference on Uncertainty in Artificial Intelligence (UAI), July 2014.
[23] M. Zhang and P. Fletcher. Probabilistic Principal Geodesic Analysis. In Advances in Neural Information Processing Systems (NIPS) 26, pages 1178–1186, 2013.
Learning Structured Sparsity in Deep Neural Networks
Wei Wen
University of Pittsburgh
wew57@pitt.edu
Chunpeng Wu
University of Pittsburgh
chw127@pitt.edu
Yiran Chen
University of Pittsburgh
yic52@pitt.edu
Yandan Wang
University of Pittsburgh
yaw46@pitt.edu
Hai Li
University of Pittsburgh
hal66@pitt.edu
Abstract
High demand for computation resources severely hinders deployment of large-scale
Deep Neural Networks (DNN) in resource constrained devices. In this work, we
propose a Structured Sparsity Learning (SSL) method to regularize the structures
(i.e., filters, channels, filter shapes, and layer depth) of DNNs. SSL can: (1) learn
a compact structure from a bigger DNN to reduce computation cost; (2) obtain a
hardware-friendly structured sparsity of DNN to efficiently accelerate the DNN?s
evaluation. Experimental results show that SSL achieves on average 5.1? and
3.1? speedups of convolutional layer computation of AlexNet against CPU and
GPU, respectively, with off-the-shelf libraries. These speedups are about twice
speedups of non-structured sparsity; (3) regularize the DNN structure to improve
classification accuracy. The results show that for CIFAR-10, regularization on
layer depth reduces a 20-layer Deep Residual Network (ResNet) to 18 layers while
improves the accuracy from 91.25% to 92.60%, which is still higher than that of
original ResNet with 32 layers. For AlexNet, SSL reduces the error by ? 1%.
1 Introduction
Deep neural networks (DNN), especially deep Convolutional Neural Networks (CNN), made remarkable success in visual tasks [1][2][3][4][5] by leveraging large-scale networks learning from a
huge volume of data. Deployment of such big models, however, is computation-intensive. To reduce
computation, many studies are performed to compress the scale of DNN, including sparsity regularization [6], connection pruning [7][8] and low rank approximation [9][10][11][12][13]. Sparsity
regularization and connection pruning, however, often produce non-structured random connectivity
and thus, irregular memory access that adversely impacts practical acceleration in hardware platforms.
Figure 1 depicts practical layer-wise speedup of AlexNet, which is non-structurally sparsified by
`1 -norm. Compared to original model, the accuracy loss of the sparsified model is controlled within
2%. Because of the poor data locality associated with the scattered weight distribution, the achieved
speedups are either very limited or negative even the actual sparsity is high, say, >95%. We define
sparsity as the ratio of zeros in this paper. In recently proposed low rank approximation approaches,
the DNN is trained first and then each trained weight tensor is decomposed and approximated by a
product of smaller factors. Finally, fine-tuning is performed to restore the model accuracy. Low rank
approximation is able to achieve practical speedups because it coordinates model parameters in dense
matrixes and avoids the locality problem of non-structured sparsity regularization. However, low
rank approximation can only obtain the compact structure within each layer, and the structures of the
layers are fixed during fine-tuning such that costly reiterations of decomposing and fine-tuning are
required to find an optimal weight approximation for performance speedup and accuracy retaining.
30th Conference on Neural Information Processing Systems (NIPS 2016), Barcelona, Spain.
Figure 1: Evaluation speedups of AlexNet on GPU platforms (Quadro K600, Tesla K40c, GTX Titan) and the per-layer sparsity, for conv1–conv5. conv1 refers to convolutional layer 1, and so forth. Baseline is profiled by GEMM of cuBLAS. The sparse matrices are stored in the format of Compressed Sparse Row (CSR) and accelerated by cuSPARSE.
Inspired by the facts that (1) there is redundancy across filters and channels [11]; (2) shapes of
filters are usually fixed as cuboid but enabling arbitrary shapes can potentially eliminate unnecessary
computation imposed by this fixation; and (3) depth of the network is critical for classification
but deeper layers cannot always guarantee a lower error because of the exploding gradients and
degradation problem [5], we propose Structured Sparsity Learning (SSL) method to directly learn
a compressed structure of deep CNNs by group Lasso regularization during the training. SSL is a
generic regularization to adaptively adjust multiple structures in DNN, including structures of filters,
channels, filter shapes within each layer, and structure of depth beyond the layers. SSL combines
structure regularization (on DNN for classification accuracy) with locality optimization (on memory
access for computation efficiency), offering not only well-regularized big models with improved
accuracy but greatly accelerated computation (e.g., 5.1× on CPU and 3.1× on GPU for AlexNet).
Our source code can be found at https://github.com/wenwei202/caffe/tree/scnn.
2 Related works
Connection pruning and weight sparsifying. Han et al. [7][8] reduced parameters of AlexNet and
VGG-16 using connection pruning. Since most reduction is achieved on fully-connected layers,
no practical speedups of convolutional layers are observed for the similar issue shown in Figure 1.
However, convolution is more costly and many new DNNs use fewer fully-connected layers, e.g., only
3.99% parameters of ResNet-152 [5] are from fully-connected layers, compression and acceleration
on convolutional layers become essential. Liu et al. [6] achieved >90% sparsity of convolutional
layers in AlexNet with 2% accuracy loss, and bypassed the issue of Figure 1 by hardcoding the sparse
weights into program. In this work, we also focus on convolutional layers. Compared to the previous
techniques, our method coordinates sparse weights in adjacent memory space and achieve higher
speedups. Note that hardware and program optimizations based on our method can further boost the
system performance which is not covered in this paper due to space limit.
Low rank approximation. Denil et al. [9] predicted 95% of the parameters in a DNN by exploiting the redundancy across filters and channels. Inspired by it, Jaderberg et al. [11] achieved 4.5× speedup on CPUs for scene text character recognition and Denton et al. [10] achieved 2× speedups for the first two layers in a larger DNN. Both of the works used Low Rank Approximation (LRA) with ~1% accuracy drop. [13][12] improved and extended LRA to larger DNNs. However, the network structure
compressed by LRA is fixed; reiterations of decomposing, training/fine-tuning, and cross-validating
are still needed to find an optimal structure for accuracy and speed trade-off. As the number of
hyper-parameters in LRA method increases linearly with the layer depth [10][13], the search space
increases linearly or even exponentially. Comparing to LRA, our contributions are: (1) SSL can
dynamically optimize the compactness of DNNs with only one hyper-parameter and no reiterations;
(2) besides the redundancy within the layers, SSL also exploits the necessity of deep layers and
reduce them; (3) DNN filters regularized by SSL have lower rank approximation, so it can work
together with LRA for more efficient model compression.
Model structure learning. Group Lasso [14] is an efficient regularization to learn sparse structures.
Liu et al. [6] utilized group Lasso to constrain the structure scale of LRA. To adapt DNN structure to
different databases, Feng et al. [16] learned the appropriate number of filters in DNN. Different from
prior arts, we apply group Lasso to regularize multiple DNN structures (filters, channels, filter shapes,
and layer depth). A most related parallel work is Group-wise Brain Damage [17], which is a subset
(i.e., learning filter shapes) of our work and further justifies the effectiveness of our techniques.
2
Figure 2: The proposed Structured Sparsity Learning (SSL) for DNNs. The weights in filters are split into multiple groups. Through group Lasso regularization, a more compact DNN is obtained by removing some groups. The figure illustrates the filter-wise, channel-wise, shape-wise, and depth-wise structured sparsity that are explored in the work.

3 Structured Sparsity Learning Method for DNNs
We focus mainly on the Structured Sparsity Learning (SSL) on convolutional layers to regularize the
structure of DNNs. We first propose a generic method to regularize structures of DNN in Section 3.1,
and then specify the method to structures of filters, channels, filter shapes and depth in Section 3.2.
Variants of formulations are also discussed from computational efficiency viewpoint in Section 3.3.
3.1 Proposed structured sparsity learning for generic structures
Suppose the weights of convolutional layers in a DNN form a sequence of 4-D tensors W^(l) ∈ R^{N_l × C_l × M_l × K_l}, where N_l, C_l, M_l and K_l are the dimensions of the l-th (1 ≤ l ≤ L) weight tensor along the axes of filter, channel, spatial height and spatial width, respectively. L denotes the number of convolutional layers. Then the proposed generic optimization target of a DNN with structured sparsity regularization can be formulated as:

E(W) = E_D(W) + λ · R(W) + λ_g · sum_{l=1}^L R_g(W^(l)).    (1)
Here W represents the collection of all weights in the DNN; E_D(W) is the loss on data; R(·) is non-structured regularization applying on every weight, e.g., the ℓ2-norm; and R_g(·) is the structured sparsity regularization on each layer. Because group Lasso can effectively zero out all weights in some groups [14][15], we adopt it in our SSL. The regularization of group Lasso on a set of weights w can be represented as R_g(w) = sum_{g=1}^G ||w^(g)||_g, where w^(g) is a group of partial weights in w and G is the total number of groups. Different groups may overlap. Here || · ||_g is the group Lasso norm, i.e., ||w^(g)||_g = sqrt( sum_{i=1}^{|w^(g)|} (w_i^(g))² ), where |w^(g)| is the number of weights in w^(g).
3.2 Structured sparsity learning for structures of filters, channels, filter shapes and depth
In SSL, the learned 'structure' is decided by the way of splitting groups of w^(g). We investigate and formulate the filter-wise, channel-wise, shape-wise, and depth-wise structured sparsity in Figure 2. For simplicity, the R(·) term of Eq. (1) is omitted in the following formulation expressions.
Penalizing unimportant filters and channels. Suppose W^(l)_{n_l,:,:,:} is the n_l-th filter and W^(l)_{:,c_l,:,:} is the c_l-th channel of all filters in the l-th layer. The optimization target of learning the filter-wise and channel-wise structured sparsity can be defined as

E(W) = E_D(W) + λ_n · sum_{l=1}^L ( sum_{n_l=1}^{N_l} ||W^(l)_{n_l,:,:,:}||_g ) + λ_c · sum_{l=1}^L ( sum_{c_l=1}^{C_l} ||W^(l)_{:,c_l,:,:}||_g ).    (2)

As indicated in Eq. (2), our approach tends to remove less important filters and channels. Note that zeroing out a filter in the l-th layer results in a dummy zero output feature map, which in turn makes a corresponding channel in the (l + 1)-th layer useless. Hence, we combine the filter-wise and channel-wise structured sparsity in the learning simultaneously.
Learning arbitrary shapes of filters. As illustrated in Figure 2, W^(l)_{:,c_l,m_l,k_l} denotes the vector of all corresponding weights located at spatial position (m_l, k_l) in the 2D filters across the c_l-th channel. Thus, we define W^(l)_{:,c_l,m_l,k_l} as the shape fiber related to learning arbitrary filter shape, because a homogeneous non-cubic filter shape can be learned by zeroing out some shape fibers. The optimization target of learning shapes of filters becomes:

E(W) = E_D(W) + λ_s · sum_{l=1}^L ( sum_{c_l=1}^{C_l} sum_{m_l=1}^{M_l} sum_{k_l=1}^{K_l} ||W^(l)_{:,c_l,m_l,k_l}||_g ).    (3)
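Since every group in Eqs. (2) and (3) is a slice of the same 4-D tensor, the three penalties reduce to norms over different axes. A minimal NumPy sketch (array names and shapes are our own):

import numpy as np

def ssl_penalties(W):
    # W has shape (N_l, C_l, M_l, K_l) for one convolutional layer.
    filt  = np.sqrt((W ** 2).sum(axis=(1, 2, 3))).sum()  # ||W[n,:,:,:]||_g terms
    chan  = np.sqrt((W ** 2).sum(axis=(0, 2, 3))).sum()  # ||W[:,c,:,:]||_g terms
    shape = np.sqrt((W ** 2).sum(axis=0)).sum()          # ||W[:,c,m,k]||_g terms
    return filt, chan, shape

W = np.random.randn(4, 3, 5, 5)
print(ssl_penalties(W))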
Regularizing layer depth. We also explore the depth-wise sparsity to regularize the depth of DNNs in order to improve accuracy and reduce computation cost. The corresponding optimization target is E(W) = E_D(W) + λ_d · sum_{l=1}^L ||W^(l)||_g. Different from other discussed sparsification techniques, zeroing out all the filters in a layer will cut off the message propagation in the DNN so that the output neurons cannot perform any classification. Inspired by the structure of highway networks [18] and deep residual networks [5], we propose to leverage the shortcuts across layers to solve this issue. As illustrated in Figure 2, even when SSL removes an entire unimportant layer, feature maps will still be forwarded through the shortcut.
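The depth-wise term itself is the simplest of the four: one group per layer, as in the following sketch (names are ours).

import numpy as np

def depth_penalty(layer_weights):
    # One group per layer: sum_l ||W^(l)||_g. Driving a whole W^(l) to zero
    # removes the layer, while the shortcut keeps the forward path alive.
    return sum(np.linalg.norm(W) for W in layer_weights)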
3.3 Structured sparsity learning for computationally efficient structures
All proposed schemes in section 3.2 can learn a compact DNN for computation cost reduction.
Moreover, some variants of the formulations of these schemes can directly learn structures that can
be efficiently computed.
2D-filter-wise sparsity for convolution. 3D convolution in DNNs essentially is a composition of 2D convolutions. To perform efficient convolution, we explored a fine-grain variant of filter-wise sparsity, namely, 2D-filter-wise sparsity, to spatially enforce group Lasso on each 2D filter W^(l)_{n_l,c_l,:,:}. The saved convolution is proportional to the percentage of the removed 2D filters. The fine-grain version of filter-wise sparsity can more efficiently reduce the computation associated with convolution: because the distance of weights (in a smaller group) from the origin is shorter, group Lasso more easily obtains a higher ratio of zero groups.
Combination of filter-wise and shape-wise sparsity for GEMM. Convolutional computation in DNNs is commonly converted to a GEneral Matrix Multiplication (GEMM) by lowering weight tensors and feature tensors to matrices [19]. For example, in Caffe [20], a 3D filter W^(l)_{n_l,:,:,:} is reshaped to a row in the weight matrix, where each column is the collection of weights W^(l)_{:,c_l,m_l,k_l} related to shape-wise sparsity. Combining filter-wise and shape-wise sparsity can directly reduce the dimension of the weight matrix in GEMM by removing zero rows and columns. In this context, we use row-wise and column-wise sparsity as the interchangeable terminology of filter-wise and shape-wise sparsity, respectively.
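A sketch of this dimension reduction, assuming the weights have already been lowered to a 2D matrix W2d and the input features to X_lowered as described above (function and variable names are hypothetical):

import numpy as np

def shrink_gemm(W2d, X_lowered):
    # Drop all-zero rows (removed filters) and columns (removed shape fibers),
    # together with the matching rows of the lowered feature matrix.
    keep_rows = np.abs(W2d).sum(axis=1) > 0
    keep_cols = np.abs(W2d).sum(axis=0) > 0
    W_small = W2d[np.ix_(keep_rows, keep_cols)]
    Y = W_small @ X_lowered[keep_cols, :]      # smaller dense GEMM
    return Y, keep_rows                        # keep_rows maps to output maps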
4 Experiments
We evaluate the effectiveness of our SSL using published models on three databases: MNIST, CIFAR-10, and ImageNet. Unless explicitly stated otherwise, SSL starts with the network whose weights are initialized by the baseline, and speedups are measured in matrix-matrix multiplication by Caffe on a single-thread Intel Xeon E5-2630 CPU. Hyper-parameters are selected by cross-validation.
4.1 LeNet and multilayer perceptron on MNIST
In the experiment of MNIST, we examine the effectiveness of SSL in two types of networks:
LeNet [21] implemented by Caffe and a multilayer perceptron (MLP) network. Both networks were
trained without data augmentation.
LeNet: When applying SSL to LeNet, we constrain the network with filter-wise and channel-wise
sparsity in convolutional layers to penalize unimportant filters and channels. Table 1 summarizes
the remained filters and channels, floating-point operations (FLOP), and practical speedups. In the
table, LeNet 1 is the baseline and the others are the results after applying SSL in different strengths
4
Table 1: Results after penalizing unimportant filters and channels in LeNet

LeNet #       Error  Filter # †  Channel # †  FLOP †       Speedup †
1 (baseline)  0.9%   20–50       1–20         100%–100%    1.00×–1.00×
2             0.8%   5–19        1–4          25%–7.6%     1.64×–5.23×
3             1.0%   3–12        1–3          15%–3.6%     1.99×–7.44×
† In the order of conv1–conv2
Table 2: Results after learning filter shapes in LeNet

LeNet #       Error  Filter size †  Channel #  FLOP        Speedup
1 (baseline)  0.9%   25–500         1–20       100%–100%   1.00×–1.00×
4             0.8%   21–41          1–2        8.4%–8.2%   2.33×–6.93×
5             1.0%   7–14           1–1        1.4%–2.8%   5.19×–10.82×
† The sizes of filters after removing zero shape fibers, in the order of conv1–conv2
of structured sparsity regularization. The results show that our method achieves a similar error (±0.1%) with much fewer filters and channels, and saves significant FLOP and computation time.
To demonstrate the impact of SSL on the structures of filters, we present all learned conv1 filters
in Figure 3. It can be seen that most filters in LeNet 2 are entirely zeroed out except for five most
important detectors of stroke patterns that are sufficient for feature extraction. The accuracy of
LeNet 3 (that further removes the weakest and redundant stroke detector) drops only 0.2% from that
of LeNet 2. Compared to the random and blurry filter patterns in LeNet 1 which are resulted from the
high freedom of parameter space, the filters in LeNet 2 & 3 are regularized and converge to smoother
and more natural patterns. This explains why our proposed SSL obtains the same-level accuracy but
has much less filters. The smoothness of the filters are also observed in the deeper layers.
The effectiveness of the shape-wise sparsity on LeNet is summarized in Table 2. The baseline LeNet 1
has conv1 filters with a regular 5 × 5 square (size = 25) while LeNet 5 reduces the dimension to one that can be constrained by a 2 × 4 rectangle (size = 7). The 3D shape of conv2 filters in the baseline is
also regularized to the 2D shape in LeNet 5 within only one channel, indicating that only one filter in
conv1 is needed. This fact significantly saves FLOP and computation time.
Figure 3: Learned conv1 filters in LeNet 1 (top), LeNet 2 (middle) and LeNet 3 (bottom)
MLP: Besides convolutional layers, our proposed SSL can be extended to learn the structure (i.e.,
the number of neurons) of fully-connected layers. We enforce the group Lasso regularization on
all the input (or output) connections of each neuron. A neuron whose input connections are all
zeroed out can degenerate to a bias neuron in the next layer; similarly, a neuron can degenerate to a
removable dummy neuron if all of its output connections are zeroed out. Figure 4(a) summarizes
the learned structure and FLOP of different MLP networks. The results show that SSL can not only
remove hidden neurons but also discover the sparsity of images. For example, Figure 4(b) depicts the
number of connections of each input neuron in MLP 2, where 40.18% of input neurons have zero
connections and they concentrate at the boundary of the image. Such a distribution is consistent with
our intuition: handwriting digits are usually written in the center and pixels close to the boundary
contain little discriminative classification information.
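In matrix form, these neuron-level groups are simply rows and columns of a fully-connected layer's weight matrix, and zeroing a whole column (or row) shrinks the GEMV product of the layer accordingly. A minimal sketch under that assumption:

import numpy as np

def neuron_penalty(W):
    # W has shape (n_out, n_in): a column group holds one input neuron's
    # outgoing connections, a row group one output neuron's incoming ones.
    in_groups = np.sqrt((W ** 2).sum(axis=0))    # one value per input neuron
    out_groups = np.sqrt((W ** 2).sum(axis=1))   # one value per output neuron
    return in_groups.sum() + out_groups.sum()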
4.2 ConvNet and ResNet on CIFAR-10
We implemented the ConvNet of [1] and deep residual networks (ResNet) [5] on CIFAR-10. When
regularizing filters, channels, and filter shapes, the results and observations of both networks are
similar to that of the MNIST experiment. Moreover, we simultaneously learn the filter-wise and
shape-wise sparsity to reduce the dimension of weight matrix in GEMM by ConvNet. We also learn
the depth-wise sparsity of ResNet to regularize the depth of the DNNs.
5
Figure 4: (a) Results of learning the number of neurons in MLP. (b) The connection numbers of input neurons (i.e., pixels) in MLP 2 after SSL. The results summarized in (a) are:

MLP #         Error   Neuron # per layer †   FLOP per layer †
1 (baseline)  1.43%   784–500–300–10         100%–100%–100%
2             1.34%   469–294–166–10         35.18%–32.54%–55.33%
3             1.53%   434–174–78–10          19.26%–9.05%–26.00%
† In the order of input layer–hidden layer 1–hidden layer 2–output layer
Table 3: Learning row-wise and column-wise sparsity of ConvNet on CIFAR-10

ConvNet #     Error  Row sparsity †     Column sparsity †  Speedup †
1 (baseline)  17.9%  12.5%–0%–0%        0%–0%–0%           1.00×–1.00×–1.00×
2             17.9%  50.0%–28.1%–1.6%   0%–59.3%–35.1%     1.43×–3.05×–1.57×
3             16.9%  31.3%–0%–1.6%      0%–42.8%–9.8%      1.25×–2.01×–1.18×
† In the order of conv1–conv2–conv3
ConvNet: We use the network from Alex Krizhevsky et al. [1] as the baseline and implement it
using Caffe. All the configurations remain the same as the original implementation except that we
added a dropout layer with a ratio of 0.5 in the fully-connected layer to avoid over-fitting. ConvNet is
trained without data augmentation. Table 3 summarizes the results of three ConvNet networks. Here,
the row/column sparsity of a weight matrix is defined as the percentage of all-zero rows/columns.
Figure 5 shows their learned conv1 filters. In Table 3, SSL can reduce the size of weight matrix
in ConvNet 2 by 50%, 70.7% and 36.1% for each convolutional layer and achieve good speedups
without accuracy drop. Surprisingly, without SSL, four conv1 filters of the baseline are actually
all-zeros as shown in Figure 5, demonstrating the great potential of filter sparsity. When SSL is
applied, half of conv1 filters in ConvNet 2 can be zeroed out without accuracy drop.
On the other hand, in ConvNet 3, SSL lowers the error by 1.0% (±0.16%) with a model even smaller than
the baseline. In this scenario, SSL performs as a structure regularization to dynamically learn a better
network structure (including the number of filters and filter shapes) to reduce the error.
ResNet: To investigate the necessary depth of DNNs by SSL, we use a 20-layer deep residual network
(ResNet-20) [5] as the baseline. The network has 19 convolutional layers and 1 fully-connected
layer. Identity shortcuts are utilized to connect the feature maps with the same dimension while 1×1 convolutional layers are chosen as shortcuts between the feature maps with different dimensions.
Batch normalization [22] is adopted after convolution and before activation. We use the same data
augmentation and training hyper-parameters as that in [5]. The final error of baseline is 8.82%. In
SSL, the depth of ResNet-20 is regularized by depth-wise sparsity. Group Lasso regularization is
only enforced on the convolutional layers between each pair of shortcut endpoints, excluding the first
convolutional layer and all convolutional shortcuts. After SSL converges, layers with all zero weights
are removed and the net is finally fine-tuned with a base learning rate of 0.01, which is lower than
that (i.e., 0.1) in the baseline.
Figure 6 plots the trend of the error vs. the number of layers under different strengths of depth regularization. Compared with the original ResNet in [5], SSL learns a ResNet with 14 layers (SSL-ResNet-14) reaching a lower error than that of the baseline with 20 layers (ResNet-20); SSL-ResNet-18 and ResNet-32 achieve an error of 7.40% and 7.51%, respectively. This result implies that SSL can work as a depth regularization to improve classification accuracy. Note that SSL can efficiently learn shallower DNNs without accuracy loss to reduce computation cost; however, this does not mean the depth of the network is unimportant. The trend in Figure 6 shows that the test error generally declines as more layers are preserved. A slight error rise of SSL-ResNet-20 from SSL-ResNet-18 shows the suboptimal selection of the depth in the group of "32×32".
Figure 5: Learned conv1 filters in ConvNet 1 (top), ConvNet 2 (middle) and ConvNet 3 (bottom)
Figure 6: Error vs. layer number after depth regularization. # is the number of layers, including the last fully-connected layer. ResNet-# is the ResNet in [5]. SSL-ResNet-# is the depth-regularized ResNet by SSL. 32×32 indicates the convolutional layers with an output map size of 32×32, etc.

4.3 AlexNet on ImageNet
To show the generalization
of our method to large scale DNNs, we evaluate SSL using AlexNet with
ILSVRC 2012. CaffeNet [20], the replication of AlexNet [1] with minor changes, is used in our experiment. All training images are rescaled to the size of 256×256. A 227×227 image is randomly cropped from each scaled image and mirrored for data augmentation, and only the center crop is
used for validation. The final top-1 validation error is 42.63%. In SSL, AlexNet is first trained with
structure regularization; when it converges, zero groups are removed to obtain a DNN with the new
structure; finally, the network is fine-tuned without SSL to regain the accuracy.
We first study 2D-filter-wise and shape-wise sparsity by exploring the trade-offs between computation complexity and classification accuracy. Figure 7(a) shows the 2D-filter sparsity (the ratio between the removed 2D filters and total 2D filters) and the saved FLOP of 2D convolutions vs. the validation error. In Figure 7(a), deeper layers generally have higher sparsity as the group size shrinks and the number of 2D filters grows. 2D-filter sparsity regularization can reduce the total FLOP by 30%–40% without accuracy loss, or reduce the error of AlexNet by ~1% down to 41.69% by retaining the original number of parameters. Shape-wise sparsity also obtains similar results. In Table 4, for example, AlexNet 5 achieves on average 1.4× layer-wise speedup on both CPU and GPU without accuracy loss after shape regularization; the top-1 error can also be reduced down to 41.83% if the parameters are retained. In Figure 7(a), the obtained DNN with the lowest error has a very low sparsity, indicating that the number of parameters in a DNN is still important to maintain learning capacity. In this case, SSL works as a regularization to add a restriction of smoothness to the model in order to avoid over-fitting. Figure 7(b) compares the results of dimensionality reduction of weight tensors in the baseline and our SSL-regularized AlexNet. The results show that the smoothness restriction enforces parameter searching in a lower-dimensional space and enables lower rank approximation of the DNNs. Therefore, SSL can work together with low rank approximation to achieve even higher model compression.
Figure 7: (a) 2D-filter-wise sparsity and FLOP reduction vs. top-1 error. The vertical dash line shows the error of the original AlexNet; (b) the reconstruction error of the weight tensor vs. dimensionality. Principal Component Analysis (PCA) is utilized to perform dimensionality reduction; the eigenvectors corresponding to the largest eigenvalues are selected as the basis of the lower-dimensional space. Dash lines denote the results of the baselines and solid lines indicate the ones of the AlexNet 5 in Table 4; (c) speedups of ℓ1-norm and SSL on various CPUs and GPUs (in the labels of the x-axis, T# is the number of maximum physical threads in the CPUs). AlexNet 1 and AlexNet 2 in Table 4 are used as testbenches.

Besides the above analyses, the computation efficiencies of structured sparsity and non-structured sparsity are compared in Caffe using standard off-the-shelf libraries, i.e., Intel Math Kernel Library
on CPU, and CUDA cuBLAS and cuSPARSE on GPU. We use SSL to learn an AlexNet with high column-wise and row-wise sparsity as the representative of the structured sparsity method. The ℓ1-norm is selected as the representative of the non-structured sparsity method instead of connection pruning [7] because the ℓ1-norm gets a higher sparsity on convolutional layers, as the results of AlexNet 3 and AlexNet 4 depicted in Table 4 show. Speedups achieved by SSL are measured by GEMM, where all-zero rows (and columns) in each weight matrix are removed and the remaining ones are concatenated in consecutive memory space. Note that compared to GEMM, the overhead of concatenation can be ignored. To measure the speedups of the ℓ1-norm, sparse weight matrices are stored in the format of Compressed Sparse Row (CSR) and computed by sparse-dense matrix multiplication subroutines.
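The following toy sketch contrasts the two execution styles with NumPy and SciPy; the matrix sizes and the 90% sparsity level are arbitrary illustrative choices, not the paper's benchmark setup.

import time
import numpy as np
from scipy import sparse

W = np.random.randn(2048, 2048)
W[np.random.rand(*W.shape) < 0.9] = 0.0          # non-structured zeros
X = np.random.randn(2048, 512)

W_csr = sparse.csr_matrix(W)                     # CSR sparse-dense multiply
t0 = time.perf_counter(); Y1 = W_csr @ X; t1 = time.perf_counter()

W_small = np.random.randn(204, 2048)             # ~90% of rows removed by SSL
t2 = time.perf_counter(); Y2 = W_small @ X; t3 = time.perf_counter()
print(f"CSR: {t1 - t0:.4f}s  dense-after-SSL: {t3 - t2:.4f}s")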
Table 4 compares the obtained sparsity and speedups of the ℓ1-norm and SSL on CPU (Intel Xeon) and GPU (GeForce GTX TITAN Black) under approximately the same errors, e.g., with acceptable or no accuracy loss. To make a fair comparison, after ℓ1-norm regularization, the DNN is also fine-tuned by disconnecting all zero-weighted connections so that, e.g., 1.39% accuracy is recovered for the AlexNet 1. Our experiments show that the DNNs require a very high non-structured sparsity to achieve a reasonable speedup (the speedups are even negative when the sparsity is low). SSL, however, can always achieve positive speedups. With an acceptable accuracy loss, our SSL achieves on average 5.1× and 3.1× layer-wise acceleration on CPU and GPU, respectively. Instead, the ℓ1-norm achieves on average only 3.0× and 0.9× layer-wise acceleration on CPU and GPU, respectively. We note that, at the same accuracy, our average speedup is indeed higher than that of [6], which adopts heavy hardware customization to overcome the negative impact of non-structured sparsity. Figure 7(c) shows the speedups of the ℓ1-norm and SSL on various platforms, including both GPU (Quadro, Tesla and Titan) and CPU (Intel Xeon E5-2630). SSL can achieve on average ~3× speedup on GPU while non-structured sparsity obtains no speedup on GPU platforms. On CPU platforms, both methods can achieve good speedups and the benefit grows as the processors become weaker. Nonetheless, SSL can always achieve on average ~2× speedup compared to non-structured sparsity.
5 Conclusion
In this work, we propose a Structured Sparsity Learning (SSL) method to regularize filter, channel,
filter shape, and depth structures in Deep Neural Networks (DNN). Our method can enforce the DNN
to dynamically learn more compact structures without accuracy loss. The structured compactness
of the DNN achieves significant speedups for the DNN evaluation both on CPU and GPU with
off-the-shelf libraries. Moreover, a variant of SSL can be performed as structure regularization to
improve classification accuracy of state-of-the-art DNNs.
Acknowledgments
This work was supported in part by NSF XPS-1337198 and NSF CCF-1615475. The authors thank
Drs. Sheng Li and Jongsoo Park for valuable feedback on this work.
Table 4: Sparsity and speedup of AlexNet on ILSVRC 2012

#  Method       Top1 err.  Statistics        conv1   conv2   conv3   conv4   conv5
1  ℓ1           44.67%     sparsity          67.6%   92.4%   97.2%   96.6%   94.3%
                           CPU ×             0.80    2.91    4.84    3.83    2.76
                           GPU ×             0.25    0.52    1.38    1.04    1.36
2  SSL          44.66%     column sparsity   0.0%    63.2%   76.9%   84.7%   80.7%
                           row sparsity      9.4%    12.9%   40.6%   46.9%   0.0%
                           CPU ×             1.05    3.37    6.27    9.73    4.93
                           GPU ×             1.00    2.37    4.94    4.03    3.05
3  pruning [7]  42.80%     sparsity          16.0%   62.0%   65.0%   63.0%   63.0%
4  ℓ1           42.51%     sparsity          14.7%   76.2%   85.3%   81.5%   76.3%
                           CPU ×             0.34    0.99    1.30    1.10    0.93
                           GPU ×             0.08    0.17    0.42    0.30    0.32
5  SSL          42.53%     column sparsity   0.00%   20.9%   39.7%   39.7%   24.6%
                           CPU ×             1.00    1.27    1.64    1.68    1.32
                           GPU ×             1.00    1.25    1.63    1.72    1.36
References
[1] Alex Krizhevsky, Ilya Sutskever, and Geoffrey E. Hinton. Imagenet classification with deep convolutional neural networks. In Advances in Neural Information Processing Systems, pages 1097–1105. 2012.
[2] Ross Girshick, Jeff Donahue, Trevor Darrell, and Jitendra Malik. Rich feature hierarchies for accurate object detection and semantic segmentation. In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2014.
[3] Karen Simonyan and Andrew Zisserman. Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556, 2014.
[4] Christian Szegedy, Wei Liu, Yangqing Jia, Pierre Sermanet, Scott Reed, Dragomir Anguelov, Dumitru Erhan, Vincent Vanhoucke, and Andrew Rabinovich. Going deeper with convolutions. arXiv preprint arXiv:1409.4842, 2015.
[5] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. arXiv preprint arXiv:1512.03385, 2015.
[6] Baoyuan Liu, Min Wang, Hassan Foroosh, Marshall Tappen, and Marianna Pensky. Sparse convolutional neural networks. In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2015.
[7] Song Han, Jeff Pool, John Tran, and William Dally. Learning both weights and connections for efficient neural network. In Advances in Neural Information Processing Systems, pages 1135–1143. 2015.
[8] Song Han, Huizi Mao, and William J. Dally. Deep compression: Compressing deep neural network with pruning, trained quantization and huffman coding. arXiv preprint arXiv:1510.00149, 2015.
[9] Misha Denil, Babak Shakibi, Laurent Dinh, Marc'Aurelio Ranzato, and Nando de Freitas. Predicting parameters in deep learning. In Advances in Neural Information Processing Systems, pages 2148–2156. 2013.
[10] Emily L Denton, Wojciech Zaremba, Joan Bruna, Yann LeCun, and Rob Fergus. Exploiting linear structure within convolutional networks for efficient evaluation. In Advances in Neural Information Processing Systems, pages 1269–1277. 2014.
[11] Max Jaderberg, Andrea Vedaldi, and Andrew Zisserman. Speeding up convolutional neural networks with low rank expansions. arXiv preprint arXiv:1405.3866, 2014.
[12] Yani Ioannou, Duncan P. Robertson, Jamie Shotton, Roberto Cipolla, and Antonio Criminisi. Training cnns with low-rank filters for efficient image classification. arXiv preprint arXiv:1511.06744, 2015.
[13] Cheng Tai, Tong Xiao, Xiaogang Wang, and Weinan E. Convolutional neural networks with low-rank regularization. arXiv preprint arXiv:1511.06067, 2015.
[14] Ming Yuan and Yi Lin. Model selection and estimation in regression with grouped variables. Journal of the Royal Statistical Society. Series B (Statistical Methodology), 68(1):49–67, 2006.
[15] Seyoung Kim and Eric P Xing. Tree-guided group lasso for multi-task regression with structured sparsity. In Proceedings of the 27th International Conference on Machine Learning, 2010.
[16] Jiashi Feng and Trevor Darrell. Learning the structure of deep convolutional networks. In The IEEE International Conference on Computer Vision (ICCV), 2015.
[17] Vadim Lebedev and Victor Lempitsky. Fast convnets using group-wise brain damage. In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), June 2016.
[18] Rupesh Kumar Srivastava, Klaus Greff, and Jürgen Schmidhuber. Highway networks. arXiv preprint arXiv:1505.00387, 2015.
[19] Sharan Chetlur, Cliff Woolley, Philippe Vandermersch, Jonathan Cohen, John Tran, Bryan Catanzaro, and Evan Shelhamer. cudnn: Efficient primitives for deep learning. arXiv preprint arXiv:1410.0759, 2014.
[20] Yangqing Jia, Evan Shelhamer, Jeff Donahue, Sergey Karayev, Jonathan Long, Ross Girshick, Sergio Guadarrama, and Trevor Darrell. Caffe: Convolutional architecture for fast feature embedding. arXiv preprint arXiv:1408.5093, 2014.
[21] Yann LeCun, Léon Bottou, Yoshua Bengio, and Patrick Haffner. Gradient-based learning applied to document recognition. Proceedings of the IEEE, 86(11):2278–2324, 1998.
[22] Sergey Ioffe and Christian Szegedy. Batch normalization: Accelerating deep network training by reducing internal covariate shift. arXiv preprint arXiv:1502.03167, 2015.
6,086 | 6,505 | Fast Active Set Methods for
Online Spike Inference from Calcium Imaging
Johannes Friedrich (1,2), Liam Paninski (1)
(1) Grossman Center and Department of Statistics, Columbia University, New York, NY
(2) Janelia Research Campus, Ashburn, VA
j.friedrich@columbia.edu, liam@stat.columbia.edu
Abstract
Fluorescent calcium indicators are a popular means for observing the spiking activity of large neuronal populations. Unfortunately, extracting the spike train of
each neuron from raw fluorescence calcium imaging data is a nontrivial problem.
We present a fast online active set method to solve this sparse nonnegative deconvolution problem. Importantly, the algorithm progresses through each time series
sequentially from beginning to end, thus enabling real-time online spike inference
during the imaging session. Our algorithm is a generalization of the pool adjacent
violators algorithm (PAVA) for isotonic regression and inherits its linear-time computational complexity. We gain remarkable increases in processing speed: more
than one order of magnitude compared to currently employed state of the art convex
solvers relying on interior point methods. Our method can exploit warm starts;
therefore optimizing model hyperparameters only requires a handful of passes
through the data. The algorithm enables real-time simultaneous deconvolution of
O(10^5) traces of whole-brain zebrafish imaging data on a laptop.
1 Introduction
Calcium imaging has become one of the most widely used techniques for recording activity from
neural populations in vivo [1]. The basic principle of calcium imaging is that neural action potentials
(or spikes), the point process signal of interest, each induce an optically measurable transient response
in calcium dynamics. The nontrivial problem to extract the spike train of each neuron from a raw
fluorescence trace has been addressed with several different approaches, including template matching
[2] and linear deconvolution [3, 4], which are outperformed by sparse nonnegative deconvolution
[5]. The latter can be interpreted as the MAP estimate under a generative model (linear convolution
plus noise; Fig. 1), whereas fully Bayesian methods [6, 7] can provide some further improvements,
but are more computationally expensive. Supervised methods trained on simultaneously-recorded
electrophysiological and imaging data [8, 9] have also recently achieved state of the art results, but
are more black-box in nature.
The methods above are typically applied to imaging data offline, after the experiment is complete;
however, there is a need for accurate and fast real-time processing to enable closed-loop experiments, a
powerful strategy for causal investigation of neural circuitry [10]. In particular, observing and feeding
back the effects of circuit interventions on physiologically relevant timescales will be valuable for
directly testing whether inferred models of dynamics, connectivity, and causation are accurate in vivo,
and recent experimental advances [11, 12] are now enabling work in this direction. Brain-computer
interfaces (BCIs) also rely on real-time estimates of neural activity. Whereas most BCI systems rely
on electrical recordings, BCIs have been driven by optical signals too [13], providing new insight
into how neurons change their activity during learning on a finer spatial scale than possible with
intracortical electrodes. Finally, adaptive experimental design approaches [14, 15, 16] also rely on
online estimates of neural activity.
[Figure 1: traces of spike train s, calcium c, and fluorescence y; y-axis "Fluorescence", x-axis "Time" (0-300)]
Figure 1: Generative autoregressive model for calcium dynamics. Spike train s gets filtered to produce calcium
trace c; here we used p = 2 as order of the AR process. Added noise yields the observed fluorescence y.
Even in cases where we do not require the strict timing/latency constraints of real-time processing,
we still need methods that scale to large data sets as for example in whole-brain imaging of larval
zebrafish [17, 18]. A further demand for scalability stems from the fact that the deconvolution
problem is solved in the inner loop of constrained nonnegative matrix factorization (CNMF) [19], the
current state of the art for simultaneous denoising, deconvolution, and demixing of spatiotemporal
calcium imaging data.
In this paper we address the pressing need for scalable online spike inference methods. We build
on the success of framing spike inference as a sparse nonnegative deconvolution problem. Current
algorithms employ interior point methods to solve the ensuing optimization problem and are fast
enough to process hundreds of neurons in about the same time as the recording [5], but will not scale
to currently obtained larger data sets such as whole-brain zebrafish imaging. Furthermore, these
interior point methods scale linearly, but they cannot be warm started, i.e. be initialized with the
solution from a previous iteration to gain speed-ups, and do not run online.
We noted a close connection between the MAP problem and isotonic regression, which fits data
by a monotone piecewise constant function. A classic isotonic regression algorithm is the pool
adjacent violators algorithm (PAVA) [20, 21], which sweeps through the data looking for violations
of the monotonicity constraint. When it finds one, it adjusts the estimate to the best possible fit with
constraints, which amounts to pooling data points with the same fitted value. During the sweep
adjacent pools that violate the constraints are merged. We generalized PAVA to derive an Online
Active Set method to Infer Spikes (OASIS) that yields speed-ups in processing time by at least one
order of magnitude compared to interior point methods on both simulated and real data. Further,
OASIS can be warm-started, which is useful in the inner loop of CNMF, and also when adjusting
model hyperparameters, as we show below. Importantly, OASIS is not only much faster, but operates
in an online fashion, progressing through the fluorescence time series sequentially from beginning to
end. The advances in speed paired with the inherently online fashion of the algorithm enable true
real-time online spike inference during the imaging session, with the potential to significantly impact
experimental paradigms. We expect our algorithm to be a useful tool for the neuroscience community,
to enable new experiments that online access to spike timings affords and to be of interest in other
fields, such as physics and quantitative finance, that deal with jump diffusion processes.
The rest of this paper is organized as follows: Section 2 introduces the autoregressive model for
calcium dynamics. In Section 3 we derive our active set method for the sparse nonnegative deconvolution problem for the simple case of AR(1) dynamics and generalize it to arbitrary AR(p) processes
in the Supplementary Material. We further use the problem's dual formulation to adjust the sparsity
level in a principled way (following [19]), and describe methods for fitting model hyperparameters
including the coefficients of the AR process. In Section 4 we show some results on simulated as well
as real data. Finally, in Section 5 we conclude with possible further extensions.
2 Autoregressive model for calcium dynamics
We assume we observe the fluorescence signal for T timesteps, and denote by s_t the number of spikes
that the neuron fired at the t-th timestep, t = 1, ..., T, cf. Figure 1. We approximate the calcium
concentration dynamics c using a stable autoregressive process of order p (AR(p)) where p is a small
positive integer, usually p = 1 or 2,

    c_t = \sum_{i=1}^{p} \gamma_i c_{t-i} + s_t .                                        (1)

The observed fluorescence y \in R^T is related to the calcium concentration as [5, 6, 7]:

    y_t = a c_t + \epsilon_t ,   \epsilon_t \sim N(0, \sigma^2)                          (2)
where a is a nonnegative scalar and the noise is assumed to be i.i.d. zero mean Gaussian with variance
\sigma^2. For the remainder we assume units such that a = 1 without loss of generality. The parameters \gamma_i
and \sigma can be estimated from the autocovariance function and the power spectral density (PSD) of
y respectively [19]. The autocovariance approach assumes that the spiking signal s comes from a
homogeneous Poisson process and in practice often gives a crude estimate of \gamma_i. We will improve on
this below (Fig. 3) by fitting the AR coefficients directly, which leads to better estimates, particularly
when the spikes have some significant autocorrelation.
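As a quick illustration of the generative model (1)-(2), the following minimal Python sketch simulates an
AR(1) trace; the firing rate, \gamma, and \sigma used here are arbitrary illustrative choices, not values
prescribed by the paper:

import numpy as np

def simulate_trace(T=3000, gamma=0.95, sigma=0.3, rate=0.01, seed=0):
    """Simulate Eqs. (1)-(2) with p = 1 and a = 1: c_t = gamma*c_{t-1} + s_t, y = c + noise."""
    rng = np.random.default_rng(seed)
    s = (rng.random(T) < rate).astype(float)  # Bernoulli spikes as a crude stand-in for Poisson spiking
    c = np.zeros(T)
    for t in range(T):
        c[t] = (gamma * c[t - 1] if t > 0 else 0.0) + s[t]
    y = c + sigma * rng.standard_normal(T)    # observed fluorescence
    return y, c, s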
The goal of calcium deconvolution is to extract an estimate of the neural activity s from the vector of
observations y. As discussed in [5, 19], this leads to the following nonnegative LASSO problem for
estimating the calcium concentration:
    minimize_c  (1/2) ||c - y||^2 + \lambda ||s||_1   subject to  s = Gc >= 0            (3)
where the \ell_1 penalty enforces sparsity of the neural activity and the lower triangular matrix G is
defined as:

    G = \begin{pmatrix}
          1         & 0         & 0         & \cdots    & 0 \\
          -\gamma_1 & 1         & 0         & \cdots    & 0 \\
          -\gamma_2 & -\gamma_1 & 1         & \cdots    & 0 \\
          \vdots    & \ddots    & \ddots    & \ddots    & \vdots \\
          0         & \cdots    & -\gamma_2 & -\gamma_1 & 1
        \end{pmatrix}                                                                    (4)
Following the approach in [5] the spike signal s is relaxed from nonnegative integers to arbitrary
nonnegative values.
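In code, G is a sparse lower-bidiagonal operator for p = 1, and applying it to a calcium trace recovers
the spikes. A minimal sketch follows (for long traces one would use this sparse form or simply the
recursion s_t = c_t - \gamma c_{t-1}, never a dense T x T matrix):

import numpy as np
import scipy.sparse as sp

def make_G(T, gamma):
    """Sparse G for AR(1): ones on the diagonal, -gamma on the first subdiagonal (Eq. 4 with p = 1)."""
    return sp.diags([np.ones(T), -gamma * np.ones(T - 1)], offsets=[0, -1], format="csr")

# Example: s = make_G(len(c), gamma).dot(c) equals c_t - gamma*c_{t-1} (with s_1 = c_1).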
3 Derivation of the active set algorithm
The optimization problem (3) could be solved using generic convex program solvers. Here we derive
the much faster Online Active Set method to Infer Spikes (OASIS).
3.1 Online Active Set method to Infer Spikes (OASIS)
For simplicity we consider first the AR(1) model and defer the cumbersome general case p > 1 to the
Supplementary Material. We begin by inserting the definition of s (Eq. 3, skipping the index of \gamma for
a single AR coefficient). Using that s is constrained to be nonnegative yields for the sparsity penalty

    \lambda ||s||_1 = \lambda 1^T s = \lambda \sum_{t=1}^{T} \sum_{k=1}^{T} G_{k,t} c_t = \lambda \sum_{t=1}^{T} (1 - \gamma + \gamma \delta_{tT}) c_t = \sum_{t=1}^{T} \lambda_t c_t = \lambda^T c        (5)

with \lambda_t := \lambda (1 - \gamma + \gamma \delta_{tT}) (with \delta denoting Kronecker's delta) by noting that the sum of the last
column of G is 1, whereas all other columns sum to (1 - \gamma). Now the problem
    minimize_c  (1/2) \sum_{t=1}^{T} (c_t - y_t)^2 + \sum_{t=1}^{T} \lambda_t c_t   subject to  c_{t+1} - \gamma c_t >= 0  for all t        (6)

shares some similarity to isotonic regression with the constraint c_{t+1} >= c_t. However, our constraint
c_{t+1} >= \gamma c_t bounds the rate of decay instead of enforcing monotonicity. We generalize PAVA to
handle the additional factor \gamma. The algorithm is based on the following: for an optimal solution, if
y_t < \gamma y_{t-1}, then the constraint becomes active and holds with equality, c_t = \gamma c_{t-1}. (Supposing
the opposite, i.e. c_t > \gamma c_{t-1}, we could move c_{t-1} and c_t by some small \epsilon to decrease the objective
without violating the constraints, yielding a proof by contradiction.)
We first present the algorithm in a way that conveys its core ideas, then improve the algorithm's
efficiency by introducing "pools" of variables (adjacent c_t values) which are updated simultaneously.
We introduce temporary values c' and initialize them to the unconstrained least squares solution,
c' = y - \lambda. Initially all constraints are in the "passive set" and possible violations are fixed by
subsequently adding the respective constraints to the "active set." Starting at t = 2 one moves
forward until a violation of the constraint c'_\tau >= \gamma c'_{\tau-1} at some time \tau is detected (Fig. 2A). Now
the constraint is added to the active set and enforced by setting c'_\tau = \gamma c'_{\tau-1}. Updating the two
time steps by minimizing (1/2)(y_{\tau-1} - c'_{\tau-1})^2 + (1/2)(y_\tau - \gamma c'_{\tau-1})^2 + \lambda_{\tau-1} c'_{\tau-1} + \lambda_\tau \gamma c'_{\tau-1} yields an
updated value c'_{\tau-1}. However, this updated value can violate the constraint c'_{\tau-1} >= \gamma c'_{\tau-2} and we
need to update c'_{\tau-2} as well, etc., until we have backtracked some \Delta t steps to time \hat{t} = \tau - \Delta t
[Figure 2: panels A-I illustrating "move forward" and "track back" steps, with check/cross marks indicating
whether a constraint violation occurred at each step]
Figure 2: Illustration of OASIS for an AR(1) process (see Supplementary Video). Red lines depict true spike
times. The shaded background shows how the time points are gathered in pools. The pool currently under
consideration is indicated by the blue crosses. A constraint violation is encountered for the second time step (A)
leading to backtracking and merging (B). The algorithm proceeds moving forward (C-E) until the next violation
occurs (E) and triggers backtracking and merging (F-G) as long as constraints are violated. When the most
recent spike time has been reached (G) the algorithm proceeds forward again (H). The process continues until
the end of the series has been reached (I). The solution is obtained and pools span the inter-spike-intervals.
where the constraint c'_{\hat{t}} >= \gamma c'_{\hat{t}-1} is already valid. At most one needs to backtrack to the most recent
spike, because c'_{t'} > \gamma c'_{t'-1} at spike times t' (Eq. 1). (Because such delays could be too long for some
interesting closed loop experiments, we show in the Supplementary Material how well the method
performs if backtracking is limited to just a few frames.) Solving
    minimize_{c'_{\hat{t}}}  (1/2) \sum_{t=0}^{\Delta t} (\gamma^t c'_{\hat{t}} - y_{\hat{t}+t})^2 + \sum_{t=0}^{\Delta t} \lambda_{\hat{t}+t} \gamma^t c'_{\hat{t}}        (7)

by setting the derivative to zero yields

    c'_{\hat{t}} = \sum_{t=0}^{\Delta t} (y_{\hat{t}+t} - \lambda_{\hat{t}+t}) \gamma^t / \sum_{t=0}^{\Delta t} \gamma^{2t}                                     (8)

and the next values are updated according to c'_{\hat{t}+t} = \gamma^t c'_{\hat{t}} for t = 1, ..., \Delta t.
(Along the way it is worth
noting that, because a spike induces a calcium response described by kernel h with components
h_{1+t} = \gamma^t, c'_{\hat{t}} could be expressed in the more familiar regression form as
h_{1:\Delta t+1}^T (y - \lambda)_{\hat{t}:\tau} / (h_{1:\Delta t+1}^T h_{1:\Delta t+1}), where
we used the notation v_{i:j} to describe a vector formed by components i to j of v.) Now one moves
forward again (Fig. 2C-E) until detection of the next violation (Fig. 2E), backtracks again to the most
recent spike (Fig. 2G), etc. Once the end of the time series is reached (Fig. 2I) we have found the
optimal solution and set c = c'.
In a worst case situation a constraint violation is encountered at every step of the forward sweep
through the series. Updating all t values up to time t yields overall \sum_{t=2}^{T} t = T(T+1)/2 - 1 updates
and an O(T^2) algorithm. In order to obtain a more efficient algorithm we introduce pools which are
tuples of the form (v_i, w_i, t_i, l_i) with value v_i, weight w_i, event time t_i and pool length l_i. Initially
there is a pool (y_t - \lambda_t, 1, t, 1) for each time step t. During backtracking pools get combined and only
the first value v_i = c'_{t_i} is explicitly considered, while the other values are merely defined implicitly
via c_{t+1} = \gamma c_t. The constraint c_{t+1} >= \gamma c_t translates to v_{i+1} >= \gamma^{l_i} v_i as the criterion determining
whether pools need to be combined. The introduced weights allow efficient value updates whenever
pools are merged by avoiding recalculating the sums in equation (8). Values are updated according to
    v_i <- (w_i v_i + \gamma^{l_i} w_{i+1} v_{i+1}) / (w_i + \gamma^{2 l_i} w_{i+1})        (9)

where the denominator is the new weight of the pool and the pool lengths are summed

    w_i <- w_i + \gamma^{2 l_i} w_{i+1}                                                    (10)
    l_i <- l_i + l_{i+1} .                                                                 (11)
Whenever pools i and i + 1 are merged, former pool i + 1 is removed and the succeeding pool
indices decreased by 1. It is easy to prove by induction that the updates according to equations
(9-11) guarantee that equation (8) holds for all values (see Supplementary Material) without having to
explicitly calculate it. The latter would be expensive for long pools, whereas merging two pools has
O(1) complexity independent of the pool lengths. With pooling the considered worst case situation
results in a single pool that is updated at every step forward, yielding O(T ) complexity. Analogous
to PAVA, the updates solve equation (6) not just greedily but optimally. The final algorithm is
summarized in Algorithm 1 and illustrated in Figure 2 as well as in the Supplementary Video.
Algorithm 1 Fast online deconvolution algorithm for AR(1) processes with positive jumps
Require: data y, decay factor \gamma, regularization parameter \lambda
 1: initialize pools as P = {(v_i, w_i, t_i, l_i)}_{i=1}^{T} <- {(y_t - \lambda(1 - \gamma + \gamma \delta_{tT}), 1, t, 1)}_{t=1}^{T} and let i <- 1
 2: while i < |P| do                                                   . iterate until end
 3:    while i < |P| and v_{i+1} >= \gamma^{l_i} v_i do i <- i + 1     . move forward
 4:    if i == |P| then break
 5:    while i > 0 and v_{i+1} < \gamma^{l_i} v_i do                   . track back
 6:       P_i <- ((w_i v_i + \gamma^{l_i} w_{i+1} v_{i+1}) / (w_i + \gamma^{2 l_i} w_{i+1}), w_i + \gamma^{2 l_i} w_{i+1}, t_i, l_i + l_{i+1})   . Eqs. (9-11)
 7:       remove P_{i+1}
 8:       i <- i - 1
 9:    i <- i + 1
10: for (v, w, t, l) in P do                                           . construct solution for all t
11:    for \tau = 0, ..., l - 1 do c_{t+\tau} <- \gamma^\tau max(0, v) . enforce c_t >= 0 via max
12: return c
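For concreteness, the following is a direct, unoptimized Python transcription of Algorithm 1 (a didactic
sketch for illustration, not the authors' reference implementation):

import numpy as np

def oasis_ar1(y, gamma, lam=0.0):
    """Solve problem (6) for an AR(1) process by pool adjacent violators with decay factor gamma."""
    T = len(y)
    mu = lam * (1 - gamma) * np.ones(T)
    mu[-1] = lam                                    # lambda_t = lam*(1 - gamma + gamma*delta_{tT})
    pools = [[y[t] - mu[t], 1.0, t, 1] for t in range(T)]  # (value v, weight w, time t, length l)
    i = 0
    while i < len(pools) - 1:
        if pools[i + 1][0] >= gamma ** pools[i][3] * pools[i][0]:
            i += 1                                  # no violation: move forward
            continue
        while True:                                 # track back, merging pools via Eqs. (9)-(11)
            v, w, t, l = pools[i]
            v1, w1, _, l1 = pools[i + 1]
            g = gamma ** l
            pools[i] = [(w * v + g * w1 * v1) / (w + g * g * w1), w + g * g * w1, t, l + l1]
            del pools[i + 1]
            if i == 0 or pools[i][0] >= gamma ** pools[i - 1][3] * pools[i - 1][0]:
                break                               # constraint to the preceding pool holds again
            i -= 1
    c = np.zeros(T)                                 # construct the solution, enforcing c >= 0
    for v, w, t, l in pools:
        c[t:t + l] = max(0.0, v) * gamma ** np.arange(l)
    s = np.append(c[0], c[1:] - gamma * c[:-1])     # deconvolved activity s = Gc
    return c, s

Calling oasis_ar1 on a trace from the earlier simulation sketch recovers the denoised calcium c and the
spike estimate s in a single pass over the data.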
3.2 Dual formulation with hard noise constraint
The formulation above contains a troublesome free sparsity parameter \lambda (implicit in \lambda_t). A more
robust deconvolution approach eliminates it by inclusion of the residual sum of squares (RSS) as a
hard constraint and not as a penalty term in the objective function [19]. The expected RSS satisfies
<||c - y||^2> = \sigma^2 T and by the law of large numbers ||c - y||^2 \approx \sigma^2 T with high probability, leading
to the constrained problem

    minimize_c ||s||_1   subject to  s = Gc >= 0  and  ||c - y||^2 <= \sigma^2 T.        (12)

(As noted above, we estimate \sigma using the power spectral estimator described in [19].) We will solve
this problem by increasing \lambda in the dual formulation until the noise constraint is tight. We start with
some small \lambda, e.g. \lambda = 0, to obtain a first partitioning into pools P, cf. Figure 3A below. From
equations (8-10) (and see also S11) along with the definition of \lambda_t (Eq. 5) it follows that given the
solution (v_i, w_i, t_i, l_i), where
    v_i = \sum_{t=0}^{l_i - 1} (y_{t_i+t} - \lambda_{t_i+t}) \gamma^t / \sum_{t=0}^{l_i - 1} \gamma^{2t}
        = \sum_{t=0}^{l_i - 1} (y_{t_i+t} - \lambda(1 - \gamma + \gamma \delta_{t_i+t,T})) \gamma^t / w_i

for some \lambda, the solution (v'_i, w'_i, t'_i, l'_i) for \lambda + \Delta\lambda is
    v'_i = v_i - \Delta\lambda \sum_{t=0}^{l_i - 1} (1 - \gamma + \gamma \delta_{t_i+t,T}) \gamma^t / w_i = v_i - \Delta\lambda (1 - \gamma^{l_i}(1 - \delta_{iz})) / w_i        (13)
where z = |P| is the index of the last pool and because pools are updated independently we make
the approximation that no changes in the pool structure occur. Inserting equation (13) into the noise
constraint (Eq. 12) results in
    \sum_{i=1}^{z} \sum_{t=0}^{l_i - 1} ((v_i - \Delta\lambda (1 - \gamma^{l_i}(1 - \delta_{iz})) / w_i) \gamma^t - y_{t_i+t})^2 = \sigma^2 T        (14)

and solving the quadratic equation yields \Delta\lambda = (-\beta + \sqrt{\beta^2 - 4\alpha\epsilon}) / (2\alpha) with \alpha = \sum_{i,t} \phi_{it}^2, \beta = 2 \sum_{i,t} \phi_{it} \psi_{it}
and \epsilon = \sum_{i,t} \psi_{it}^2 - \sigma^2 T, where \phi_{it} = ((1 - \gamma^{l_i}(1 - \delta_{iz})) / w_i) \gamma^t and \psi_{it} = y_{t_i+t} - v_i \gamma^t.
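In code, this \lambda-step reduces to accumulating the three coefficients over all pools and applying the
quadratic formula; a sketch (pool tuples as above, y the data, z = |P|):

import numpy as np

def delta_lambda(pools, y, gamma, sigma):
    """Solve Eq. (14) for the increment Delta-lambda of the dual variable."""
    T = sum(l for _, _, _, l in pools)
    alpha, beta, eps = 0.0, 0.0, -sigma ** 2 * T
    z = len(pools)
    for i, (v, w, t, l) in enumerate(pools):
        tau = np.arange(l)
        last = (i == z - 1)                         # Kronecker delta_{iz}: 1 only for the final pool
        phi = (1 - gamma ** l * (not last)) / w * gamma ** tau
        psi = y[t:t + l] - v * gamma ** tau
        alpha += np.sum(phi ** 2)
        beta += 2 * np.sum(phi * psi)
        eps += np.sum(psi ** 2)
    return (-beta + np.sqrt(beta ** 2 - 4 * alpha * eps)) / (2 * alpha)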
[Figure 3: panels A-F; "run Alg. 1" steps with RSS-vs-\lambda and RSS-vs-\gamma curves crossing the \sigma^2 T level,
fluorescence traces (Truth, Estimate, Data) with correlations 0.734, 0.753, 0.767, 0.777, 0.791 and, after
~3 iterations to converge, 0.849; legend "true \gamma" vs. "\gamma from autocovariance"; x-axis Time [s] (0-40)]
Figure 3: Optimizing sparsity parameter \lambda and AR coefficient \gamma. (A) Running the active set method, with
conservatively small estimate of \gamma, yields an initial denoised estimate (blue) of the data (yellow) roughly
capturing the truth (red). We also report the correlation between the deconvolved estimate and true spike train
as direct measure for the accuracy of spike train inference. (B) Updating sparsity parameter \lambda according to
Eq. (14) such that RSS = \sigma^2 T (left) shifts the current estimate downward (right, blue). (C) Running the active
set method enforces the constraints again and is fast due to warm-starting. (D) Updating \gamma by minimizing the
polynomial function RSS(\gamma) and (E) running the warm-started active set method completes one iteration, which
yields already a decent fit. (F) A few more iterations improve the solution further and the obtained estimate is
hardly distinguishable from the one obtained with known true \gamma (turquoise dashed on top of blue solid line).
Note that determining \gamma based on the autocovariance (purple) yields a crude solution that even misses spikes (at
24.6 s and 46.5 s).
The solution \Delta\lambda provides a good approximate proposal step for updating the pool values v_i (using
Eq. 13). Since this update proposal is only approximate it can give rise to violated constraints (e.g.,
negative values of v_i). To satisfy all constraints Algorithm 1 is run to update the pool structure, cf.
Figure 3C, but with a warm start: we initialize with the current set of merely z pools P' instead of the
T pools for a cold start (Alg. 1, line 1). This step returns a set of v_i values that satisfy the constraints
and may merge pools (i.e., delete spikes); then the procedure (update \lambda then rerun the warm-started
Algorithm 1) can be iterated until no further pools need to be merged, at which point the procedure
has converged. In practice this leads to an increasing sequence of \lambda values (corresponding to an
increasingly sparse set of spikes), and no pool-split (i.e., add-spike) moves are necessary.^1
This warm-starting approach brings major speed benefits: after the residual is updated following a
\lambda update, the computational cost of the algorithm is linear in the number of pools z, hence warm
starting drastically reduces computational costs from k_1 T to k_2 z with proportionality constants k_1
and k_2: if no pool boundary updates are needed then after warm starting the algorithm only needs to
pass once through all pools to verify that no constraint is violated, whereas a cold start might involve
a couple passes over the data to update pools, so k_2 is typically significantly smaller than k_1, and z is
typically much smaller than T (especially in sparsely-spiking regimes).
3.3 Optimizing the AR coefficient
Thus far the parameter \gamma has been known or been estimated based on the autocovariance function.
We can improve upon this estimate by optimizing \gamma as well, which is illustrated in Figure 3. After
updating \lambda followed by running Algorithm 1, we perform a coordinate descent step in \gamma that minimizes
the RSS, cf. Figure 3D. The RSS as a function of \gamma is a high order polynomial, cf. equation (8), and
we need to settle for numerical solutions. We used Brent's method [22] with bounds 0 <= \gamma < 1. One
iteration consists now of steps B-E in Figure 3, while for known \gamma only B-C were necessary.
^1 Note that it is possible to cheaply detect any violations of the KKT conditions in a candidate solution; if
such a violation is detected, the corresponding pool could be split and the warm-started Algorithm 1 run locally
near the detected violations. However, as we noted, due to the increasing \lambda sequence we did not find this step to
be necessary in the examples examined here.
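The \gamma-step of Section 3.3 can use an off-the-shelf bounded scalar minimizer. A sketch with SciPy's
Brent-based bounded routine follows; fit_with_gamma is hypothetical shorthand for one run of the
warm-started active set method at a candidate \gamma, not an API from the paper:

from scipy.optimize import minimize_scalar

def update_gamma(y, fit_with_gamma, lo=0.0, hi=0.999):
    """Coordinate descent step in gamma: numerically minimize RSS(gamma), cf. Fig. 3D."""
    def rss(gamma):
        c = fit_with_gamma(gamma)    # denoised estimate at this gamma (assumed helper)
        return float(((c - y) ** 2).sum())
    return minimize_scalar(rss, bounds=(lo, hi), method="bounded").x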
[Figure 4: panels A-E; fluorescence and activity traces (Data, Truth, OASIS, CVXPY) over Time [s], and
computation-time bars (log scale) for solvers OASIS, ECOS, MOSEK, SCS, GUROBI]
Figure 4: OASIS produces the same high quality results as convex solvers at least an order of magnitude faster.
(A) Raw and inferred traces for simulated AR(1) data, (B) simulated AR(2) and (C) real data from [29] modeled
as AR(2) process. OASIS solves equation (3) exactly for AR(1) and just approximately for AR(2) processes,
nevertheless well extracting spikes. (D) Computation time for simulated AR(1) data with given \lambda (blue circles,
Eq. 3) or inference with hard noise constraint (green x, Eq. 12). GUROBI failed on the noise constrained
problem. (E) Computation time for simulated AR(2) data.
4 Results
4.1 Benchmarking OASIS
We generated datasets of 20 fluorescence traces each for p = 1 and 2 with a duration of 100 s at
a framerate of 30 Hz, such that T = 3,000 frames. The spiking signal came from a homogeneous
Poisson process. We used \gamma = 0.95, \sigma = 0.3 for the AR(1) model and \gamma_1 = 1.7, \gamma_2 = -0.712,
\sigma = 1 for the AR(2) model. Figures 4A-C are reassuring that our suggested (dual) active set method
yields indeed the same results as other convex solvers for an AR(1) process and that spikes are
extracted well. For an AR(2) process OASIS is greedy and yields good results that are similar to the
one obtained with convex solvers (lower panels in Fig. 4B and C), with virtually identical denoised
fluorescence traces (upper panels). An exact fast (primal) active set method for this case is
presented in the extended journal version of this paper [23].
Figures 4D,E report the computation time (±SEM) averaged over all 20 traces and ten runs per trace
on a MacBook Pro with Intel Core i5 2.7 GHz CPU. We compared the run time of our algorithm
to a variety of state of the art convex solvers that can all be conveniently called from the convex
optimization toolbox CVXPY [24]: embedded conic solver (ECOS, [25]), MOSEK [26], splitting
conic solver (SCS, [27]) and GUROBI [28]. With given sparsity parameter \lambda (Eq. 3) OASIS is about
two magnitudes faster than any other method for an AR(1) process (Fig. 4D, blue disks) and more
than one magnitude for an AR(2) process (Fig. 4E). Whereas the other solvers take almost the same
time for the noise constrained problem (Eq. 12, Fig. 4D,E, green x), our method takes about three
times longer to find the value of the dual variable ? compared to the formulation where the residual is
part of the objective; nevertheless it still outperforms the other algorithms by a huge margin.
We also ran the algorithms on longer traces of length T = 30,000 frames, confirming that OASIS
scales linearly with T . Our active set method maintained its lead by 1-2 orders of magnitude in
computing time. Further, compared to our active set method the other algorithms required at least an
order of magnitude more RAM, confirming that OASIS is not only faster but much more memory
efficient. Indeed, because OASIS can run in online mode the memory footprint can be O(1), instead
of O(T ).
We verified these results on real data as well. Running OASIS with the hard noise constraint and
p = 2 on the GCaMP6s dataset collected at 60 Hz from [29] took 0.101 ± 0.005 s per trace, whereas
the fastest other methods required 2.37 ± 0.12 s. Figure 4C shows the real data together with the
inferred denoised and deconvolved traces as well as the true spike times, which were obtained by
simultaneous electrophysiological recordings [29].
We also extracted each neuron's fluorescence activity using CNMF from an unpublished whole-brain
zebrafish imaging dataset from the M. Ahrens lab. Running OASIS with hard noise constraint and
p = 1 (chosen because the calcium onset was fast compared to the acquisition rate of 2 Hz) on 10,000
traces out of a total of 91,478 suspected neurons took 81.5 s whereas ECOS, the fastest competitor,
needed 2,818.1 s. For all neurons we would hence expect 745 s for OASIS, which is below the 1,500 s
recording duration, and over 25,780 s for ECOS and other candidates.
4.2 Hyperparameter optimization
We have shown that we can solve equation (3) and equation (12) faster than interior point methods.
The AR coefficient \gamma was either known or estimated based on the autocorrelation in the above analyses.
The latter approach assumes that the spiking signal comes from a homogeneous Poisson process,
which does not generally hold for realistic data. Therefore we were interested in optimizing not
only the sparsity parameter \lambda, but also the AR(1) coefficient \gamma. To illustrate the optimization of both,
we generated a fluorescence trace with spiking signal from an inhomogeneous Poisson process
with sinusoidal instantaneous firing rate (Fig. 3), thus mimicking realistic data. We conservatively
initialized \gamma to a small value of 0.9. The value obtained based on the autocorrelation was 0.9792
and larger than the true value of 0.95. The left panels in Figures 3B and D illustrate the update of
\lambda from the previous value to \lambda + \Delta\lambda by solving a quadratic equation analytically (Eq. 14) and the
update of \gamma by numerical minimization of a high order polynomial respectively. Note that after
merely one iteration (Fig. 3E) a good solution is obtained and after three iterations the solution is
virtually identical to the one obtained when the true value of \gamma has been provided (Fig. 3F). This
holds not only visually, but also when judged by the correlation between deconvolved activity and
ground truth spike train, which was 0.869 compared to merely 0.773 if \gamma was obtained based on the
autocorrelation. The optimization was robust to the initial value of \gamma, as long as it was positive and
not, or only marginally, greater than the true value. The value obtained based on the autocorrelation
was considerably greater and partitioned the time series into pools in a way that missed entire spikes.
A quantification of the computing time for hyperparameter optimization as well as means to reduce it
are presented in the extended journal version [23].
5 Conclusions
We presented an online active set method for spike inference from calcium imaging data. We assumed
that the forward model to generate a fluorescence trace from a spike train is linear-Gaussian. Future
work will extend the method to nonlinear models [30] incorporating saturation effects and a noise
variance that increases with the mean fluorescence to better resemble the Poissonian statistics of
photon counts. In the Supplementary Material we already extend our mathematical formulation to
include weights for each time point as a first step in this direction.
Further development, contained in the extended journal version [23], includes and optimizes an
explicit fluorescence baseline. It also provides means to speed up the optimization of model hyperparameters, including the added baseline. It presents an exact and fast (primal) active set method for
AR(p > 1) processes and more general calcium response kernels. A further extension is to add the
constraint that positive spikes need to be larger than some minimal value, which renders the problem
non-convex. A minor modification to our algorithm enables it to find an (approximate) solution of this
non-convex problem, which can be marginally better than the solution obtained with the \ell_1 regularizer.
Acknowledgments
We would like to thank Misha Ahrens and Yu Mu for providing whole-brain imaging data of larval
zebrafish. We thank John Cunningham for fruitful discussions and Scott Linderman as well as Daniel
Soudry for valuable comments on the manuscript.
Funding for this research was provided by Swiss National Science Foundation Research Award
P300P2_158428, Simons Foundation Global Brain Research Awards 325171 and 365002, ARO
MURI W911NF-12-1-0594, NIH BRAIN Initiative R01 EB22913 and R21 EY027592, DARPA
N66001-15-C-4032 (SIMPLEX), and a Google Faculty Research award; in addition, this work
was supported by the Intelligence Advanced Research Projects Activity (IARPA) via Department of
Interior/ Interior Business Center (DoI/IBC) contract number D16PC00003. The U.S. Government
is authorized to reproduce and distribute reprints for Governmental purposes notwithstanding any
copyright annotation thereon. Disclaimer: The views and conclusions contained herein are those
of the authors and should not be interpreted as necessarily representing the official policies or
endorsements, either expressed or implied, of IARPA, DoI/IBC, or the U.S. Government.
References
[1] C Grienberger and C Konnerth. Imaging calcium in neurons. Neuron, 73(5):862–885, 2012.
[2] B F Grewe, D Langer, H Kasper, B M Kampa, and F Helmchen. High-speed in vivo calcium imaging
reveals neuronal network activity with near-millisecond precision. Nat Methods, 7(5):399–405, 2010.
[3] E Yaksi and R W Friedrich. Reconstruction of firing rate changes across neuronal populations by temporally
deconvolved Ca2+ imaging. Nat Methods, 3(5):377–383, 2006.
[4] T F Holekamp, D Turaga, and T E Holy. Fast three-dimensional fluorescence imaging of activity in neural
populations by objective-coupled planar illumination microscopy. Neuron, 57(5):661–672, 2008.
[5] J T Vogelstein et al. Fast nonnegative deconvolution for spike train inference from population calcium
imaging. J Neurophysiol, 104(6):3691–3704, 2010.
[6] J T Vogelstein, B O Watson, A M Packer, R Yuste, B Jedynak, and L Paninski. Spike inference from
calcium imaging using sequential monte carlo methods. Biophys J, 97(2):636–655, 2009.
[7] E A Pnevmatikakis, J Merel, A Pakman, and L Paninski. Bayesian spike inference from calcium imaging
data. Asilomar Conference on Signals, Systems and Computers, 2013.
[8] T Sasaki, N Takahashi, N Matsuki, and Y Ikegaya. Fast and accurate detection of action potentials from
somatic calcium fluctuations. J Neurophysiol, 100(3):1668–1676, 2008.
[9] L Theis et al. Benchmarking spike rate inference in population calcium imaging. Neuron, 90(3):471–482,
2016.
[10] L Grosenick, J H Marshel, and K Deisseroth. Closed-loop and activity-guided optogenetic control. Neuron,
86(1):106–139, 2015.
[11] J P Rickgauer, K Deisseroth, and D W Tank. Simultaneous cellular-resolution optical perturbation and
imaging of place cell firing fields. Nat Neurosci, 17(12):1816–1824, 2014.
[12] A M Packer, L E Russell, H WP Dalgleish, and M Häusser. Simultaneous all-optical manipulation and
recording of neural circuit activity with cellular resolution in vivo. Nat Methods, 12(2):140–146, 2015.
[13] K B Clancy, A C Koralek, R M Costa, D E Feldman, and J M Carmena. Volitional modulation of optically
recorded calcium signals during neuroprosthetic learning. Nat Neurosci, 17(6):807–809, 2014.
[14] J Lewi, R Butera, and L Paninski. Sequential optimal design of neurophysiology experiments. Neural
Comput, 21(3):619–687, 2009.
[15] M Park and J W Pillow. Bayesian active learning with localized priors for fast receptive field characterization. In Adv Neural Inf Process Syst, pages 2348–2356, 2012.
[16] B Shababo, B Paige, A Pakman, and L Paninski. Bayesian inference and online experimental design for
mapping neural microcircuits. In Adv Neural Inf Process Syst, pages 1304–1312, 2013.
[17] M B Ahrens, M B Orger, D N Robson, J M Li, and P J Keller. Whole-brain functional imaging at cellular
resolution using light-sheet microscopy. Nat Methods, 10(5):413–420, 2013.
[18] N Vladimirov et al. Light-sheet functional imaging in fictively behaving zebrafish. Nat Methods, 2014.
[19] E A Pnevmatikakis et al. Simultaneous denoising, deconvolution, and demixing of calcium imaging data.
Neuron, 89(2):285–299, 2016.
[20] M Ayer et al. An empirical distribution function for sampling with incomplete information. Ann Math Stat,
26(4):641–647, 1955.
[21] R E Barlow, D J Bartholomew, JM Bremner, and H D Brunk. Statistical inference under order restrictions:
The theory and application of isotonic regression. Wiley New York, 1972.
[22] R P Brent. Algorithms for Minimization Without Derivatives. Courier Corporation, 1973.
[23] J Friedrich, P Zhou, and L Paninski. Fast active set methods for online deconvolution of calcium imaging
data. arXiv, 1609.00639, 2016.
[24] S Diamond and S Boyd. CVXPY: A Python-embedded modeling language for convex optimization. J
Mach Learn Res, 17(83):1–5, 2016.
[25] A Domahidi, E Chu, and S Boyd. ECOS: An SOCP solver for embedded systems. In European Control
Conference (ECC), pages 3071–3076, 2013.
[26] E D Andersen and K D Andersen. The MOSEK interior point optimizer for linear programming: an
implementation of the homogeneous algorithm. In High performance optimization, pages 197–232.
Springer, 2000.
[27] B O'Donoghue, E Chu, N Parikh, and S Boyd. Conic optimization via operator splitting and homogeneous
self-dual embedding. J Optim Theory Appl, pages 1–27, 2016.
[28] Gurobi Optimization Inc. Gurobi optimizer reference manual, 2015.
[29] T-W Chen et al. Ultrasensitive fluorescent proteins for imaging neuronal activity. Nature, 499(7458):295–300, 2013.
[30] T A Pologruto, R Yasuda, and K Svoboda. Monitoring neural activity and [Ca2+] with genetically encoded
Ca2+ indicators. J Neurosci, 24(43):9572–9579, 2004.
6,087 | 6,506 | NESTT: A Nonconvex Primal-Dual Splitting Method
for Distributed and Stochastic Optimization
Davood Hajinezhad, Mingyi Hong (*)      Tuo Zhao (**)      Zhaoran Wang (***)
Abstract
We study a stochastic and distributed algorithm for nonconvex problems whose
objective consists of a sum of N nonconvex Li /N -smooth functions, plus a nonsmooth regularizer. The proposed NonconvEx primal-dual SpliTTing (NESTT)
algorithm splits the problem into N subproblems, and utilizes an augmented
Lagrangian based primal-dual scheme to solve it in a distributed and stochastic
manner. With a special non-uniform sampling, a version of NESTT achieves an \epsilon-stationary
solution using O((\sum_{i=1}^{N} \sqrt{L_i/N})^2 / \epsilon) gradient evaluations, which
can be up to O(N ) times better than the (proximal) gradient descent methods.
It also achieves a Q-linear convergence rate for nonconvex \ell_1 penalized quadratic
problems with polyhedral constraints. Further, we reveal a fundamental connection between primal-dual based methods and a few primal only methods such as
IAG/SAG/SAGA.
1 Introduction
Consider the following nonconvex and nonsmooth constrained optimization problem
    min_{z \in Z}  f(z) := (1/N) \sum_{i=1}^{N} g_i(z) + g_0(z) + p(z),        (1.1)
where Z \subseteq R^d; for each i \in {0, ..., N}, g_i : R^d -> R is a smooth possibly nonconvex function
which has an L_i-Lipschitz continuous gradient; p(z) : R^d -> R is a lower semi-continuous convex but
possibly nonsmooth function. Define g(z) := (1/N) \sum_{i=1}^{N} g_i(z) for notational simplicity.
Problem (1.1) is quite general. It arises frequently in applications such as machine learning and signal
processing; see a recent survey [7]. In particular, each smooth function {g_i}_{i=1}^{N} can represent:
1) a mini-batch of loss functions modeling data fidelity, such as the \ell_2 loss, the logistic loss, etc;
2) nonconvex activation functions for neural networks, such as the logit or the tanh functions; 3)
nonconvex utility functions used in signal processing and resource allocation, see [4]. The smooth
function g_0 can represent smooth nonconvex regularizers such as the non-quadratic penalties [2], or
the smooth part of the SCAD or MCP regularizers (which is a concave function) [26]. The convex
function p can take the following form: 1) nonsmooth convex regularizers such as \ell_1 and \ell_2 functions;
2) an indicator function for a convex and closed feasible set Z, denoted as \iota_Z(.); 3) convex
functions without global Lipschitz continuous gradient, such as p(z) = z^4 or p(z) = 1/z + \iota_{z>0}(z).
In this work we solve (1.1) in a stochastic and distributed manner. We consider the setting in which
$N$ distributed agents each have knowledge of one smooth function $g_i$ (for $i = 1,\dots,N$), and they are
connected to a cluster center which handles g0 and p. At any given time, a randomly selected agent
is activated and performs computation to optimize its local objective. Such distributed computation
model has been popular in large-scale machine learning and signal processing [6]. Such model
is also closely related to the (centralized) stochastic finite-sum optimization problem [1, 9, 14, 15,
†Department of Industrial & Manufacturing Systems Engineering and Department of Electrical & Computer Engineering, Iowa State University, Ames, IA, {dhaji,mingyi}@iastate.edu
‡School of Industrial and Systems Engineering, Georgia Institute of Technology, tourzhao@gatech.edu
§Department of Operations Research, Princeton University, zhaoran@princeton.edu
30th Conference on Neural Information Processing Systems (NIPS 2016), Barcelona, Spain.
21, 22], in which each time the iterate is updated based on the gradient information of a random
component function. One of the key differences between these two problem types is that in the
distributed setting there can be disagreement between local copies of the optimization variable z,
while in the centralized setting only one copy of z is maintained.
Our Contributions. We propose a class of NonconvEx primal-dual SpliTTing (NESTT) algorithms for problem (1.1). We split $z \in \mathbb{R}^d$ into local copies $x_i \in \mathbb{R}^d$, while enforcing the equality constraints $x_i = z$ for all $i$. That is, we consider the following reformulation of (1.1):

$$\min_{x,\,z\in\mathbb{R}^d}\ \ell(x, z) := \frac{1}{N}\sum_{i=1}^{N} g_i(x_i) + g_0(z) + h(z), \quad \text{s.t.}\ x_i = z,\ i = 1,\cdots,N, \qquad (1.2)$$

where $h(z) := \iota_Z(z) + p(z)$ and $x := [x_1;\cdots;x_N]$. Our algorithm uses the Lagrangian relaxation of the equality constraints, and at each iteration a (possibly non-uniformly) randomly selected primal variable is optimized, followed by an approximate dual ascent step. Note that such a splitting scheme has been popular in the convex setting [6], but not so when the problem becomes nonconvex.
NESTT is one of the first stochastic algorithms for distributed nonconvex nonsmooth optimization with provable and nontrivial convergence rates. Our main contributions are given below. First, in terms of certain primal and dual optimality gaps, NESTT converges sublinearly to a point in the stationary solution set of (1.2). Second, NESTT converges Q-linearly for certain nonconvex $\ell_1$ penalized quadratic problems. To the best of our knowledge, this is the first time that linear convergence is established for stochastic and distributed optimization of such type of problems. Third, we show that a gradient-based NESTT with non-uniform sampling achieves an $\epsilon$-stationary solution of (1.1) using $O\big(\big(\sum_{i=1}^N \sqrt{L_i/N}\big)^2/\epsilon\big)$ gradient evaluations. Compared with the classical gradient descent, which in the worst case requires $O\big(\sum_{i=1}^N L_i/\epsilon\big)$ gradient evaluations to achieve $\epsilon$-stationarity, our obtained rate can be up to $O(N)$ times better in the case where the $L_i$'s are not equal.

Our work also reveals a fundamental connection between primal-dual based algorithms and primal-only average-gradient based algorithms such as SAGA/SAG/IAG [5, 9, 22]. With the key observation that the dual variables in NESTT serve as the "memory" of the past gradients, one can specialize NESTT to SAGA/SAG/IAG. Therefore, NESTT naturally generalizes these algorithms to the nonconvex nonsmooth setting. It is our hope that by bridging the primal-dual splitting algorithms and primal-only algorithms (in both the convex and nonconvex settings), there can be significant further research developments benefiting both algorithm classes.
Related Work. Many stochastic algorithms have been designed for (1.2) when it is convex. In these algorithms the component functions $g_i$ are randomly sampled and optimized. Popular algorithms include SAG/SAGA [9, 22], SDCA [23], SVRG [14], RPDG [15] and so on. When the problem becomes nonconvex, the well-known incremental based algorithms can be used [3, 24], but these methods generally lack convergence rate guarantees. The SGD based method has been studied in [10], with an $O(1/\epsilon^2)$ convergence rate. Recent works [1] and [21] develop algorithms based on SVRG and SAGA for a special case of (1.1) where the entire problem is smooth and unconstrained. To the best of our knowledge there have been no stochastic algorithms with provable, and nontrivial, convergence rate guarantees for solving problem (1.1). On the other hand, a distributed stochastic algorithm for solving problem (1.1) in the nonconvex setting has been proposed in [13], in which each time a randomly picked subset of agents update their local variables. However there has been no convergence rate analysis for such a distributed stochastic scheme. There are some recent distributed algorithms designed for (1.1) [17], but again without global convergence rate guarantees.
Preliminaries. The augmented Lagrangian function for problem (1.1) is given by:

$$L(x, z; \lambda) = \sum_{i=1}^{N}\Big[\frac{1}{N}g_i(x_i) + \langle \lambda_i, x_i - z\rangle + \frac{\eta_i}{2}\|x_i - z\|^2\Big] + g_0(z) + h(z), \qquad (1.3)$$

where $\lambda := \{\lambda_i\}_{i=1}^N$ is the set of dual variables, and $\eta := \{\eta_i > 0\}_{i=1}^N$ are penalty parameters.
We make the following assumptions about problem (1.1) and the function (1.3).
A-(a) The function $f(z)$ is bounded from below over $Z \cap \mathrm{int}(\mathrm{dom}\,f)$: $\underline{f} := \min_{z\in Z} f(z) > -\infty$; $p(z)$ is a convex lower semi-continuous function; $Z$ is a closed convex set.
A-(b) The $g_i$'s and $g$ have Lipschitz continuous gradients, i.e.,
$$\|\nabla g(y) - \nabla g(z)\| \le L\|y - z\|, \quad\text{and}\quad \|\nabla g_i(y) - \nabla g_i(z)\| \le L_i\|y - z\|, \quad \forall\, y, z.$$
Algorithm 1 NESTT-G Algorithm
1: for r = 1 to R do
2:   Pick $i_r \in \{1, 2, \cdots, N\}$ with probability $p_{i_r}$ and update $(x, \lambda)$:
       $x_{i_r}^{r+1} = \arg\min_{x_{i_r}} V_{i_r}(x_{i_r}, z^r, \lambda_{i_r}^r)$;   (2.4)
       $\lambda_{i_r}^{r+1} = \lambda_{i_r}^r + \alpha_{i_r}\eta_{i_r}\big(x_{i_r}^{r+1} - z^r\big)$;   (2.5)
       $x_j^{r+1} = z^r$,  $\lambda_j^{r+1} = \lambda_j^r$,  $\forall\, j \ne i_r$;   (2.6)
     Update $z$:
       $z^{r+1} = \arg\min_{z\in Z} L(\{x_i^{r+1}\}, z; \lambda^r)$.   (2.7)
3: end for
4: Output: $(z^m, x^m, \lambda^m)$ where $m$ is randomly picked from $\{1, 2, \cdots, R\}$.
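A minimal NumPy sketch of Algorithm 1 follows, specialized (as an illustrative assumption) to $g_0 \equiv 0$ and $h(z) = \nu\|z\|_1$ with $Z = \mathbb{R}^d$, so that the $z$-step (2.7) reduces to soft-thresholding; the $x$-step uses the closed form (2.8a) derived below, and the dual warm start follows the memory interpretation in (2.8b).

```python
import numpy as np

def soft_threshold(u, t):
    return np.sign(u) * np.maximum(np.abs(u) - t, 0.0)

def nestt_g(grads, d, eta, alpha, p, R, nu=0.0, seed=0):
    """Sketch of NESTT-G with h(z) = nu*||z||_1, g0 = 0, Z = R^d.

    grads: list of N callables, grads[i](z) = gradient of g_i at z.
    eta, alpha, p: length-N numpy arrays (penalties, step factors, sampling
    probabilities summing to one).
    """
    rng = np.random.default_rng(seed)
    N = len(grads)
    z = np.zeros(d)
    x = np.zeros((N, d))
    lam = np.array([-grads[i](z) / N for i in range(N)])  # dual "memory"
    eta_sum = eta.sum()
    for _ in range(R):
        i = rng.choice(N, p=p)
        x[:] = z                                       # (2.6): reset other blocks
        x[i] = z - (lam[i] + grads[i](z) / N) / (alpha[i] * eta[i])  # (2.4)/(2.8a)
        lam[i] = lam[i] + alpha[i] * eta[i] * (x[i] - z)             # (2.5)
        # (2.7): minimizing (1.3) in z gives the prox of h at the weighted average
        avg = (eta[:, None] * x + lam).sum(axis=0) / eta_sum
        z = soft_threshold(avg, nu / eta_sum)
    return z
```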
Clearly $L \le \frac{1}{N}\sum_{i=1}^N L_i$, and the equality can be achieved in the worst case. For simplicity of analysis we will further assume that $L_0 \le \frac{1}{N}\sum_{i=1}^N L_i$.
A-(c) Each $\eta_i$ in (1.3) satisfies $\eta_i > L_i/N$; if $g_0$ is nonconvex, then $\sum_{i=1}^N \eta_i > 3L_0$.
Assumption A-(c) implies that $L(x, z; \lambda)$ is strongly convex w.r.t. each $x_i$ and $z$, with modulus $\gamma_i := \eta_i - L_i/N$ and $\gamma_z = \sum_{i=1}^N \eta_i - L_0$, respectively [27, Theorem 2.1].
We then define the prox-gradient (pGRAD) for (1.1), which will serve as a measure of stationarity. It can be checked that the pGRAD vanishes at the set of stationary solutions of (1.1) [20].
Definition 1.1. The proximal gradient of problem (1.1) is given by (for any $\gamma > 0$)
$$\tilde\nabla_\gamma f(z) := \gamma\Big(z - \mathrm{prox}^{\gamma}_{p+\iota_Z}\big[z - \tfrac{1}{\gamma}\nabla\big(g(z) + g_0(z)\big)\big]\Big), \ \text{ with }\ \mathrm{prox}^{\gamma}_{p+\iota_Z}[u] := \mathop{\mathrm{argmin}}_{v\in Z}\ p(v) + \frac{\gamma}{2}\|u - v\|^2.$$
The NESTT-G Algorithm
Algorithm Description. We present a primal-dual splitting scheme for the reformulated problem
(1.2). The algorithm is referred to as the NESTT with Gradient step (NESTT-G) since each agent
only requires to know the gradient of each component function. To proceed, let us define the following function (for some constants {?i > 0}N
i=1 ):
Vi (xi , z; ?i ) =
1
1
?i ?i
gi (z) + h?gi (z), xi ? zi + h?i , xi ? zi +
kxi ? zk2 .
N
N
2
Note that Vi (?) is related to L(?) in the following way: it is a quadratic approximation (approximated
at the point z) of L(x, y; ?) w.r.t. xi . The parameters ? := {?i }N
i=1 give some freedom to the
algorithm design, and they are critical in improving convergence rates as well as in establishing
connection between NESTT-G with a few primal only stochastic optimization schemes.
The algorithm proceeds as follows. Before each iteration begins, the cluster center broadcasts $z$ to everyone. At iteration $r+1$ a randomly selected agent $i_r \in \{1, \dots, N\}$ is picked, who minimizes $V_{i_r}(\cdot)$ w.r.t. its local variable $x_{i_r}$, followed by a dual ascent step for $\lambda_{i_r}$. The rest of the agents update their local variables by simply setting them to $z$. The cluster center then minimizes $L(x, z; \lambda)$ with respect to $z$. See Algorithm 1 for details. We remark that NESTT-G is related to the popular ADMM method for convex optimization [6]. However, our particular update schedule (randomly picking $(x_i, \lambda_i)$ plus deterministically updating $z$), combined with the special $x$-step (minimizing an approximation of $L(\cdot)$ evaluated at a different block variable $z$), is not known before. These features are critical in our rate analysis below.
Convergence Analysis. To proceed, let us define $r(j)$ as the last iteration in which the $j$th block is picked before iteration $r+1$, i.e., $r(j) := \max\{t \mid t < r+1,\ j = i(t)\}$. Define $y_j^r := z^{r(j)}$ if $j \ne i_r$, and $y_{i_r}^r = z^r$. Define the filtration $\mathcal{F}^r$ as the $\sigma$-field generated by $\{i(t)\}_{t=1}^{r-1}$.
A few important observations are in order. Combining the $(x, z)$ updates (2.4)-(2.7), we have

$$\frac{1}{N}\nabla g_q(z^r) + \lambda_q^r + \alpha_q\eta_q\big(x_q^{r+1} - z^r\big) = 0 \ \Longrightarrow\ x_q^{r+1} = z^r - \frac{1}{\alpha_q\eta_q}\Big(\lambda_q^r + \frac{1}{N}\nabla g_q(z^r)\Big), \ \text{ with } q = i_r; \qquad (2.8a)$$

$$\lambda_{i_r}^{r+1} = -\frac{1}{N}\nabla g_{i_r}(z^r), \quad \lambda_j^{r+1} = -\frac{1}{N}\nabla g_j(z^{r(j)})\ \ \forall\, j \ne i_r \ \Longrightarrow\ \lambda_i^{r+1} = -\frac{1}{N}\nabla g_i(y_i^r)\ \ \forall\, i; \qquad (2.8b)$$

$$x_j^{r+1} \overset{(2.6)}{=} z^r \overset{(2.8b)}{=} z^r - \frac{1}{\alpha_j\eta_j}\Big(\lambda_j^r + \frac{1}{N}\nabla g_j(z^{r(j)})\Big), \quad \forall\, j \ne i_r. \qquad (2.8c)$$
The key here is that the dual variables serve as the "memory" for the past gradients of the $g_i$'s. To proceed, we first construct a potential function using an upper bound of $L(x, z; \lambda)$. Note that

$$\frac{1}{N}g_j(x_j^{r+1}) + \langle \lambda_j^r, x_j^{r+1} - z^r\rangle + \frac{\eta_j}{2}\|x_j^{r+1} - z^r\|^2 = \frac{1}{N}g_j(z^r), \quad \forall\, j \ne i_r; \qquad (2.9)$$

$$\frac{1}{N}g_{i_r}(x_{i_r}^{r+1}) + \langle \lambda_{i_r}^r, x_{i_r}^{r+1} - z^r\rangle + \frac{\eta_{i_r}}{2}\|x_{i_r}^{r+1} - z^r\|^2 \overset{(i)}{\le} \frac{1}{N}g_{i_r}(z^r) + \frac{\eta_{i_r} + L_{i_r}/N}{2}\|x_{i_r}^{r+1} - z^r\|^2 \overset{(ii)}{=} \frac{1}{N}g_{i_r}(z^r) + \frac{\eta_{i_r} + L_{i_r}/N}{2(\alpha_{i_r}\eta_{i_r})^2}\Big\|\frac{1}{N}\big(\nabla g_{i_r}(y_{i_r}^{r-1}) - \nabla g_{i_r}(z^r)\big)\Big\|^2, \qquad (2.10)$$

where (i) uses (2.8b) and applies the descent lemma on the function $\frac{1}{N}g_i(\cdot)$; in (ii) we have used (2.5) and (2.8b). Since each $i$ is picked with probability $p_i$, we have
$$\mathbb{E}_{i_r}[L(x^{r+1}, z^r; \lambda^r)\,|\,\mathcal{F}^r] \le \sum_{i=1}^{N}\frac{1}{N}g_i(z^r) + \sum_{i=1}^{N}\frac{p_i(\eta_i + L_i/N)}{2(\alpha_i\eta_i)^2}\Big\|\frac{1}{N}\big(\nabla g_i(y_i^{r-1}) - \nabla g_i(z^r)\big)\Big\|^2 + g_0(z^r) + h(z^r)$$
$$\le \sum_{i=1}^{N}\frac{1}{N}g_i(z^r) + \sum_{i=1}^{N}\frac{3p_i\eta_i}{(\alpha_i\eta_i)^2}\Big\|\frac{1}{N}\big(\nabla g_i(y_i^{r-1}) - \nabla g_i(z^r)\big)\Big\|^2 + g_0(z^r) + h(z^r) := Q^r,$$
where in the last inequality we have used Assumption A-(c). In the following, we will use $\mathbb{E}_{\mathcal{F}^r}[Q^r]$ as the potential function, and show that it decreases at each iteration.
Lemma 2.1. Suppose Assumption A holds, and pick
$$\alpha_i = p_i = \beta\eta_i, \ \text{ where }\ \beta := \frac{1}{\sum_{i=1}^N \eta_i}, \quad\text{and}\quad \eta_i \ge \frac{9L_i}{Np_i}, \quad i = 1,\cdots,N. \qquad (2.11)$$
Then the following descent estimate holds true for NESTT-G:
$$\mathbb{E}[Q^r - Q^{r-1}\,|\,\mathcal{F}^{r-1}] \le -\frac{\sum_{i=1}^N \eta_i}{8}\,\mathbb{E}_{z^r}\|z^r - z^{r-1}\|^2 - \sum_{i=1}^{N}\frac{1}{2\eta_i}\Big\|\frac{1}{N}\big(\nabla g_i(z^{r-1}) - \nabla g_i(y_i^{r-2})\big)\Big\|^2. \qquad (2.12)$$
Sublinear Convergence. Define the optimality gap as follows:
$$\mathbb{E}[G^r] := \mathbb{E}\big[\|\tilde\nabla_{1/\beta} f(z^r)\|^2\big] = \frac{1}{\beta^2}\,\mathbb{E}\Big[\big\|z^r - \mathrm{prox}_h^{1/\beta}\big[z^r - \beta\nabla\big(g(z^r) + g_0(z^r)\big)\big]\big\|^2\Big].$$
Note that when $h, g_0 \equiv 0$, $\mathbb{E}[G^r]$ reduces to $\mathbb{E}[\|\nabla g(z^r)\|^2]$. We have the following result.
Theorem 2.1. Suppose Assumption A holds, and pick (for $i = 1,\cdots,N$)
$$\alpha_i = p_i = \frac{\sqrt{L_i/N}}{\sum_{i=1}^N \sqrt{L_i/N}}, \qquad \eta_i = 3\sqrt{L_i/N}\Big(\sum_{i=1}^N \sqrt{L_i/N}\Big), \qquad (2.13)$$
$$\beta = \frac{1}{3\big(\sum_{i=1}^N \sqrt{L_i/N}\big)^2}. \qquad (2.14)$$
Then every limit point generated by NESTT-G is a stationary solution of problem (1.2). Further,
$$1)\ \ \mathbb{E}[G^m] \le \frac{80}{3}\Big(\sum_{i=1}^N \sqrt{L_i/N}\Big)^2\,\frac{\mathbb{E}[Q^1 - Q^{R+1}]}{R};$$
$$2)\ \ \mathbb{E}[G^m] + \mathbb{E}\Big[\sum_{i=1}^N 3\eta_i^2\big\|x_i^m - z^{m-1}\big\|^2\Big] \le \frac{80}{3}\Big(\sum_{i=1}^N \sqrt{L_i/N}\Big)^2\,\frac{\mathbb{E}[Q^1 - Q^{R+1}]}{R}.$$
Note that Part (1) is useful in the centralized finite-sum minimization setting, as it shows the sublinear convergence of NESTT-G, measured only by the primal optimality gap evaluated at $z^r$. Meanwhile, Part (2) is useful in the distributed setting, as it also shows that the expected constraint violation, which measures the consensus among agents, shrinks in the same order. We also comment
that the above result suggests that to achieve an $\epsilon$-stationary solution, NESTT-G requires about $O\big(\big(\sum_{i=1}^N \sqrt{L_i/N}\big)^2/\epsilon\big)$ gradient evaluations (for simplicity we have ignored an additive $N$ factor for evaluating the gradient of the entire function at the initial step of the algorithm).
Algorithm 2 NESTT-E Algorithm
1: for r = 1 to R do
2:   Update $z$ by minimizing the augmented Lagrangian:
       $z^{r+1} = \arg\min_z L(x^r, z; \lambda^r)$.   (3.15)
3:   Randomly pick $i_r \in \{1, 2, \cdots, N\}$ with probability $p_{i_r}$:
       $x_{i_r}^{r+1} = \arg\min_{x_{i_r}} U_{i_r}(x_{i_r}, z^{r+1}; \lambda_{i_r}^r)$;   (3.16)
       $\lambda_{i_r}^{r+1} = \lambda_{i_r}^r + \alpha_{i_r}\eta_{i_r}\big(x_{i_r}^{r+1} - z^{r+1}\big)$;   (3.17)
       $x_j^{r+1} = x_j^r$,  $\lambda_j^{r+1} = \lambda_j^r$,  $\forall\, j \ne i_r$.   (3.18)
4: end for
5: Output: $(z^m, x^m, \lambda^m)$ where $m$ is randomly picked from $\{1, 2, \cdots, R\}$.
It is interesting to observe that our choice of $p_i$ is proportional to the square root of the Lipschitz constant of each component function, rather than to $L_i$ itself. Because of this choice of sampling probability, the derived convergence rate has a mild dependency on $N$ and the $L_i$'s. Compared with conventional gradient-based methods, our scaling can be up to $N$ times better. A detailed discussion and comparison will be given in Section 4.
Note that similar sublinear convergence rates can be obtained for the case $\alpha_i = 1$ for all $i$ (with different scaling constants). However, due to space limitations, we do not present those results here.
Linear Convergence. In this section we show that NESTT-G is capable of linear convergence for a family of nonconvex quadratic problems, which has important applications, for example in high-dimensional statistical learning [16]. To proceed, we will assume the following.
B-(a) Each function $g_i(z)$ is a quadratic function of the form $g_i(z) = \frac{1}{2}z^\top A_i z + \langle b, z\rangle$, where $A_i$ is a symmetric matrix but not necessarily positive semidefinite;
B-(b) The feasible set $Z$ is a closed compact polyhedral set;
B-(c) The nonsmooth function $p(z) = \nu\|z\|_1$, for some $\nu \ge 0$.
Our linear convergence result is based upon a certain error bound condition around the stationary solution set, which has been shown in [18] for smooth quadratic problems and has been extended to include the $\ell_1$ penalty in [25, Theorem 4]. Due to space limitations the statement of the condition is given in the supplemental material, along with the proof of the following result.
Theorem 2.2. Suppose that Assumptions A and B are satisfied. Then the sequence $\{\mathbb{E}[Q^{r+1}]\}_{r=1}^{\infty}$ converges Q-linearly⁴ to some $Q^* = f(z^*)$, where $z^*$ is a stationary solution of problem (1.1). That is, there exist a finite $\bar r > 0$ and $\varrho \in (0, 1)$ such that for all $r \ge \bar r$, $\mathbb{E}[Q^{r+1} - Q^*] \le \varrho\,\mathbb{E}[Q^r - Q^*]$.
Linear convergence of this type for problems satisfying Assumption B has been shown for (deterministic) proximal gradient based methods [25, Theorems 2, 3]. To the best of our knowledge, this is the first result that shows the same linear convergence for a stochastic and distributed algorithm.
3 The NESTT-E Algorithm
Algorithm Description. In this section, we present a variant of NESTT-G, named NESTT with Exact minimization (NESTT-E). Our motivation is the following. First, in NESTT-G every agent must update its local variable at every iteration [cf. (2.4) or (2.6)]. In practice this may not be possible; for example, at any given time a few agents can be in sleeping mode, so they cannot perform (2.6). Second, in the distributed setting it has been generally observed (e.g., see [8, Section V]) that performing exact minimization (whenever possible) instead of taking gradient steps for local problems can significantly speed up the algorithm. The NESTT-E algorithm presented in this section is designed to address these issues. To proceed, let us define a new function as follows:

$$U(x, z; \lambda) := \sum_{i=1}^{N} U_i(x_i, z; \lambda_i) := \sum_{i=1}^{N}\Big[\frac{1}{N}g_i(x_i) + \langle \lambda_i, x_i - z\rangle + \frac{\alpha_i\eta_i}{2}\|x_i - z\|^2\Big].$$

⁴A sequence $\{x^r\}$ is said to converge Q-linearly to some $\bar x$ if $\limsup_r \|x^{r+1} - \bar x\|/\|x^r - \bar x\| \le \varrho$, where $\varrho \in (0, 1)$ is some constant; cf. [25] and references therein.
Note that if $\alpha_i = 1$ for all $i$, then $L(x, z; \lambda) = U(x, z; \lambda) + p(z) + h(z)$. The algorithm details are presented in Algorithm 2.
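For the quadratic losses of Assumption B, the exact minimization (3.16) has a closed form; the following sketch solves the resulting linear system. The quadratic form of $g_i$ (with a per-block linear term $b_i$, a slight generalization for illustration) is our working assumption.

```python
import numpy as np

def nestt_e_x_step(A_i, b_i, z, lam_i, alpha_i, eta_i, N):
    """Exact x-step (3.16) for g_i(x) = 0.5*x'A_i x + b_i'x.

    Setting the gradient of U_i to zero gives the linear system
        (A_i/N + alpha_i*eta_i*I) x = alpha_i*eta_i*z - lam_i - b_i/N,
    which has a unique solution whenever alpha_i*eta_i > L_i/N (so that
    U_i is strongly convex in x_i).
    """
    d = z.shape[0]
    H = A_i / N + alpha_i * eta_i * np.eye(d)
    rhs = alpha_i * eta_i * z - lam_i - b_i / N
    return np.linalg.solve(H, rhs)
```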
Convergence Analysis. We now analyze NESTT-E. The proof technique is quite different from that for NESTT-G: it is based on using the expected value of the augmented Lagrangian function as the potential function; see [11, 12, 13]. For ease of description we define the following quantities:
$$w := (x, z, \lambda), \qquad \beta := \frac{1}{\sum_{i=1}^N \eta_i}, \qquad c_i := \frac{L_i^2}{\eta_i\alpha_i N^2} - \frac{\eta_i}{2} + \frac{(1 - \alpha_i)L_i}{\alpha_i N}, \qquad \alpha := \{\alpha_i\}_{i=1}^N.$$
To measure the optimality of NESTT-E, define the prox-gradient of $L(x, z; \lambda)$ as:
$$\tilde\nabla L(w) = \Big[\big(z - \mathrm{prox}_h[z - \nabla_z(L(w) - h(z))]\big);\ \nabla_{x_1}L(w);\ \cdots;\ \nabla_{x_N}L(w)\Big] \in \mathbb{R}^{(N+1)d}. \qquad (3.19)$$
We define the optimality gap by adding to $\|\tilde\nabla L(w)\|^2$ the size of the constraint violation [13]:
$$H(w^r) := \|\tilde\nabla L(w^r)\|^2 + \sum_{i=1}^{N}\frac{L_i^2}{N^2}\|x_i^r - z^r\|^2.$$
It can be verified that $H(w^r) \to 0$ implies that $w^r$ reaches a stationary solution of problem (1.2). We have the following theorem regarding the convergence properties of NESTT-E.
Theorem 3.1. Suppose Assumption A holds, and that $(\eta_i, \alpha_i)$ are chosen such that $c_i < 0$. Then for some constant $\underline{f}$, we have
$$\mathbb{E}[L(w^r)] \ge \mathbb{E}[L(w^{r+1})] \ge \underline{f} > -\infty, \quad \forall\, r \ge 0.$$
Further, almost surely every limit point of $\{w^r\}$ is a stationary solution of problem (1.2). Finally, for some function of $\alpha$ denoted as $C(\alpha) = \sigma_1(\alpha)/\sigma_2(\alpha)$, we have the following:
$$\mathbb{E}[H(w^m)] \le \frac{C(\alpha)\,\mathbb{E}[L(w^1) - L(w^{R+1})]}{R}, \qquad (3.20)$$
where $\sigma_1 := \max(\bar\sigma_1(\alpha), \hat\sigma_1)$ and $\sigma_2 := \max(\bar\sigma_2(\alpha), \hat\sigma_2)$, and these constants are given by
$$\bar\sigma_1(\alpha) = \max_i\bigg\{4\eta_i^2 + \Big(2 + \frac{1}{\alpha_i}\Big)^2\frac{L_i^2}{N^2} + \frac{L_i^4}{\eta_i\alpha_i^2N^4} + \frac{L_i^2}{N^2}\bigg\}, \qquad \hat\sigma_1 = \sum_{i=1}^N\frac{L_i^2}{N^2} + \Big(\sum_{i=1}^N\eta_i + L_0\Big)^2 + 3\sum_{i=1}^N\frac{L_i^2}{N^2},$$
$$\bar\sigma_2(\alpha) = \max_i\ p_i\bigg(\frac{\eta_i}{2} - \frac{L_i^2}{N^2\eta_i\alpha_i} - \frac{(1 - \alpha_i)L_i}{\alpha_i N}\bigg), \qquad \hat\sigma_2 = \frac{\sum_{i=1}^N\eta_i - L_0}{2}.$$
We remark that the above result shows the sublinear convergence of NESTT-E to the set of stationary solutions. Note that $\gamma_i = \eta_i - L_i/N$; to satisfy $c_i < 0$, a simple derivation yields
$$\eta_i > \frac{L_i\big((2 - \alpha_i) + \sqrt{(\alpha_i - 2)^2 + 8\alpha_i}\big)}{2N\alpha_i}.$$
Further, the above result characterizes the dependency of the rates on various parameters of the algorithm. For example, to see the effect of $\alpha$ on the convergence rate, let us set $p_i = \frac{L_i}{\sum_{i=1}^N L_i}$ and $\eta_i = 3L_i/N$, assume $L_0 = 0$, and then consider two different choices of $\alpha$: $\hat\alpha_i = 1,\ \forall i$, and $\tilde\alpha_i = 4,\ \forall i$. One can easily check that applying these different choices leads to the following results:
$$C(\hat\alpha) = 49\sum_{i=1}^N L_i/N, \qquad C(\tilde\alpha) = 28\sum_{i=1}^N L_i/N.$$
The key observation is that increasing the $\alpha_i$'s reduces the constant in front of the rate. Hence, we expect that in practice larger $\alpha_i$'s will yield faster convergence.
4 Connections and Comparisons with Existing Works
In this section we compare NESTT-G/E with a few existing algorithms in the literature. First, we present a somewhat surprising observation: NESTT-G takes the same form as some well-known algorithms for convex finite-sum problems. To formally state this relation, we show in the following result that NESTT-G in fact admits a compact primal-only characterization.
Table 1: Comparison of # of gradient evaluations for NESTT-G and GD in the worst case

Case                                                 NESTT-G: O((Σᵢ√(Lᵢ/N))²/ε)   GD: O(Σᵢ Lᵢ/ε)
I:   Lᵢ = 1, ∀i                                      O(N/ε)                        O(N/ε)
II:  O(√N) terms with Lᵢ = N, the rest with Lᵢ = 1   O(N/ε)                        O(N^{3/2}/ε)
III: O(1) terms with Lᵢ = N², the rest with Lᵢ = 1   O(N/ε)                        O(N²/ε)
Proposition 4.1. NESTT-G can be written in the following compact form:
$$z^{r+1} = \arg\min_z\ h(z) + g_0(z) + \frac{1}{2\beta}\|z - u^{r+1}\|^2, \qquad (4.21a)$$
$$\text{with}\quad u^{r+1} := z^r - \beta\bigg(\frac{1}{N\alpha_{i_r}}\big(\nabla g_{i_r}(z^r) - \nabla g_{i_r}(y_{i_r}^{r-1})\big) + \frac{1}{N}\sum_{i=1}^N \nabla g_i(y_i^{r-1})\bigg). \qquad (4.21b)$$
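The compact recursion (4.21) is easy to simulate; the sketch below assumes $h \equiv 0$ and $g_0 \equiv 0$ (so the prox step is the identity), which makes the SAGA-style "memory" role of the dual variables explicit.

```python
import numpy as np

def nestt_g_compact(grads, d, beta, alpha, p, R, seed=0):
    """Primal-only form (4.21) with h = g0 = 0.  With alpha_i = p_i = 1/N this
    is exactly a SAGA-style average-gradient step; g_mem[i] stores the gradient
    of g_i at the point y_i where it was last evaluated."""
    rng = np.random.default_rng(seed)
    N = len(grads)
    z = np.zeros(d)
    g_mem = np.array([grads[i](z) for i in range(N)])   # gradients at y_i
    for _ in range(R):
        i = rng.choice(N, p=p)
        new_g = grads[i](z)
        u = z - beta * ((new_g - g_mem[i]) / (N * alpha[i]) + g_mem.mean(axis=0))
        g_mem[i] = new_g        # refresh the memory of block i, cf. (2.8b)
        z = u                   # the prox step is the identity since h = g0 = 0
    return z
```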
Based on this observation, the following comments are in order.
(1) Suppose $h \equiv 0$, $g_0 \equiv 0$ and $\alpha_i = 1$, $p_i = 1/N$ for all $i$. Then (4.21) takes the same form as the SAG presented in [22]. Further, when the component functions $g_i$ are picked cyclically in a Gauss-Seidel manner, the iteration (4.21) takes the same form as the IAG algorithm [5].
(2) Suppose $h \ne 0$ and $g_0 \ne 0$, and $\alpha_i = p_i = 1/N$ for all $i$. Then (4.21) is the same as the SAGA algorithm [9], which is designed for optimizing convex nonsmooth finite-sum problems.
Note that SAG/SAGA/IAG are all designed for convex problems. Through the lens of primal-dual splitting, our work shows that they can be generalized to nonconvex nonsmooth problems as well.
Secondly, NESTT-E is related to the proximal version of the nonconvex ADMM [13, Algorithm 2]. However, the introduction of the $\alpha_i$'s is new; it can significantly improve the practical performance but complicates the analysis. Further, there has been no counterpart of the sublinear and linear convergence rate analysis for the stochastic version of [13, Algorithm 2].
Thirdly, we note that a recent paper [21] has shown that SAGA works for smooth and unconstrained nonconvex problems. Suppose that $h \equiv 0$, $g_0 \ne 0$, $L_i = L_j\ \forall\, i, j$ and $\alpha_i = p_i = 1/N$; the authors show that SAGA achieves $\epsilon$-stationarity using $O\big(N^{2/3}\big(\sum_{i=1}^N L_i/N\big)/\epsilon\big)$ gradient evaluations. Compared with GD, which achieves $\epsilon$-stationarity using $O\big(\sum_{i=1}^N L_i/\epsilon\big)$ gradient evaluations in the worst case (in the sense that $\sum_{i=1}^N L_i/N = L$), the rate in [21] is $O(N^{1/3})$ times better. However, the algorithm in [21] differs from NESTT-G in two aspects: 1) it does not generalize to the nonsmooth constrained problem (1.1); 2) it samples two component functions at each iteration, while NESTT-G only samples once. Further, the analysis and the scaling are derived for the case of uniform $L_i$'s, so it is not clear how the algorithm and the rates can be adapted to the non-uniform case. On the other hand, our NESTT works in the general nonsmooth constrained setting. The non-uniform sampling used in NESTT-G is well-suited for problems with non-uniform $L_i$'s, and our scaling can be up to $N$ times better than GD (or its proximal version) in the worst case. Note that problems with non-uniform $L_i$'s for the component functions are common in applications such as sparse optimization and signal processing. For example, in the LASSO problem the data matrix is often normalized by feature (or "column-normalized" [19]), so the $\ell_2$ norm of each row of the data matrix (which corresponds to the Lipschitz constant of each component function) can be dramatically different.
In Table 1 we list the comparison of the number of gradient evaluations for NESTT-G and GD in the worst case (in the sense that $\sum_{i=1}^N L_i/N = L$). For simplicity, we omitted an additive constant of $O(N)$ for computing the initial gradients.
5 Numerical Results
In this section we evaluate the performance of NESTT. Consider the high-dimensional regression problem with noisy observations [16], where $M$ observations are generated by $y = X\beta + \epsilon$. Here $y \in \mathbb{R}^M$ is the observed data sample; $X \in \mathbb{R}^{M\times P}$ is the covariate matrix; $\beta \in \mathbb{R}^P$ is the ground truth; and $\epsilon \in \mathbb{R}^M$ is the noise. Suppose that the covariate matrix is not perfectly known, i.e., we observe $A = X + W$ where $W \in \mathbb{R}^{M\times P}$ is a noise matrix with known covariance matrix $\Sigma_W$. Let us define $\hat\Gamma := \frac{1}{M}(A^\top A) - \Sigma_W$, and $\hat\gamma := \frac{1}{M}(A^\top y)$. To estimate the ground truth $\beta$, let
[Figure 1: Comparison of NESTT-G/E, SAGA, and SGD on problem (5.22). Both panels plot the optimality gap (log scale) against the number of passes of the dataset (# Grad/N), for SGD, NESTT-E ($\alpha = 10$), NESTT-E ($\alpha = 1$), NESTT-G, and SAGA. Left: uniform sampling $p_i = 1/N$; Right: non-uniform sampling $p_i = \sqrt{L_i/N}\big/\sum_{i=1}^N \sqrt{L_i/N}$.]
Table 2: Optimality gap $\|\tilde\nabla_{1/\beta} f(z^r)\|^2$ for different algorithms, with 100 passes of the datasets.

       SGD                  NESTT-E (α = 10)       NESTT-G                SAGA
N      Uniform   Non-Uni    Uniform   Non-Uni      Uniform   Non-Uni      Uniform   Non-Uni
10     3.4054    0.2265     2.6E-16   6.16E-19     2.3E-21   6.1E-24      2.7E-17   2.8022
20     0.6370    6.9087     2.4E-9    5.9E-9       1.2E-10   2.9E-11      7.7E-7    11.3435
30     0.2260    0.1639     3.2E-6    2.7E-6       4.5E-7    1.4E-7       2.5E-5    0.1253
40     0.0574    0.3193     5.8E-4    8.1E-5       1.8E-5    3.1E-5       4.1E-5    0.7385
50     0.0154    0.0409     8.3E-4    7.1E-4       1.2E-4    2.7E-4       2.5E-4    3.3187
us consider the following (nonconvex) optimization problem posed in [16, problem (2.4)] (where $R > 0$ controls sparsity):
$$\min_z\ z^\top \hat\Gamma z - \hat\gamma^\top z \quad \text{s.t.}\ \|z\|_1 \le R. \qquad (5.22)$$
Due to the existence of noise, $\hat\Gamma$ is not positive semidefinite, hence the problem is not convex. Note that this problem satisfies Assumptions A-B, so by Theorem 2.2 NESTT-G converges Q-linearly.
To test the performance of the proposed algorithms, we generate the problem following setups similar to [16]. Let $X = (X_1; \cdots; X_N) \in \mathbb{R}^{M\times P}$ with $\sum_i N_i = M$, where each $X_i \in \mathbb{R}^{N_i\times P}$ corresponds to $N_i$ data points and is generated from i.i.d. Gaussians. Here $N_i$ represents the size of each mini-batch of samples. Generate the observations $y_i = X_i\beta^* + \epsilon_i \in \mathbb{R}^{N_i}$, where $\beta^*$ is a $K$-sparse vector to be estimated, and $\epsilon_i \in \mathbb{R}^{N_i}$ is random noise. Let $W = [W_1; \cdots; W_N]$, with $W_i \in \mathbb{R}^{N_i\times P}$ generated with i.i.d. Gaussians. Therefore we have $z^\top \hat\Gamma z = \frac{1}{N}\sum_{i=1}^N \frac{N}{M}\, z^\top\big(X_i^\top X_i - W_i^\top W_i\big)z$.
We set $M = 100{,}000$, $P = 5000$, $N = 50$, $K = 22 \ll P$, and $R = \|\beta^*\|_1$. We implement NESTT-G/E, SGD, and the nonconvex SAGA proposed in [21] with stepsize $\alpha = \frac{1}{3L_{\max}N^{2/3}}$ (with $L_{\max} := \max_i L_i$). Note that the SAGA proposed in [21] only works for unconstrained problems with uniform $L_i$, so when applied to (5.22) it is not guaranteed to converge. Here we only include it for comparison purposes.
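For reference, a data-generation sketch for this experiment follows; the default sizes are scaled down from the paper's $M = 100{,}000$, $P = 5000$ so the snippet runs quickly, and the noise scales are illustrative assumptions.

```python
import numpy as np

def make_instance(M=2000, P=200, N=10, K=8, w_scale=0.1, seed=0):
    """Generate per-block data for problem (5.22): observed blocks A_i = X_i + W_i,
    responses y_i, the K-sparse ground truth beta*, and the l1 radius R.
    The paper uses M = 100,000, P = 5000, N = 50, K = 22."""
    rng = np.random.default_rng(seed)
    Ni = M // N
    beta_star = np.zeros(P)
    support = rng.choice(P, K, replace=False)
    beta_star[support] = rng.standard_normal(K)
    blocks = []
    for _ in range(N):
        X = rng.standard_normal((Ni, P))
        W = w_scale * rng.standard_normal((Ni, P))   # known covariance w_scale^2*I
        y = X @ beta_star + rng.standard_normal(Ni)
        blocks.append((X + W, y, W))                 # (A_i, y_i, W_i)
    R = np.abs(beta_star).sum()
    return blocks, beta_star, R
```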
In Fig. 1 we compare the different algorithms in terms of the gap $\|\tilde\nabla_{1/\beta} f(z^r)\|^2$. In the left figure we consider the problem with $N_i = N_j$ for all $i, j$, and we show the performance of the proposed algorithms with uniform sampling (i.e., the probability of picking the $i$th block is $p_i = 1/N$). In the right one we consider problems in which approximately half of the component functions have $L_i$'s twice the size of the rest, and consider non-uniform sampling ($p_i = \sqrt{L_i/N}\big/\sum_{i=1}^N \sqrt{L_i/N}$). Clearly in both cases the proposed algorithms perform quite well. Furthermore, it is clear that NESTT-E performs well with large $\alpha := \{\alpha_i\}_{i=1}^N$, which confirms our theoretical rate analysis. It is also worth mentioning that when the $N_i$'s are non-uniform, the proposed algorithms [NESTT-G and NESTT-E (with $\alpha = 10$)] significantly outperform SAGA and SGD. In Table 2 we further compare the different algorithms when changing the number of component functions (i.e., the number of mini-batches $N$), while the rest of the setup is as above. We run each algorithm with 100 passes over the dataset. As before, our algorithms perform well, while SAGA seems to be sensitive to non-uniformity in the size of the mini-batches [note that there is no convergence guarantee for SAGA applied to the nonconvex constrained problem (5.22)].
References
[1] Z. Allen-Zhu and E. Hazan. Variance reduction for faster non-convex optimization. 2016. Preprint, available on arXiv: arXiv:1603.05643.
[2] A. Antoniadis, I. Gijbels, and M. Nikolova. Penalized likelihood regression for generalized linear models with non-quadratic penalties. Annals of the Institute of Statistical Mathematics, 63(3):585–615, 2009.
[3] D. Bertsekas. Incremental gradient, subgradient, and proximal methods for convex optimization: A survey. 2000. LIDS Report 2848.
[4] E. Bjornson and E. Jorswieck. Optimal resource allocation in coordinated multi-cell systems. Foundations and Trends in Communications and Information Theory, 9, 2013.
[5] D. Blatt, A. O. Hero, and H. Gauchman. A convergent incremental gradient method with a constant step size. SIAM Journal on Optimization, 18(1):29–51, 2007.
[6] S. Boyd, N. Parikh, E. Chu, B. Peleato, and J. Eckstein. Distributed optimization and statistical learning via the alternating direction method of multipliers. Foundations and Trends in Machine Learning, 3(1):1–122, 2011.
[7] V. Cevher, S. Becker, and M. Schmidt. Convex optimization for big data: Scalable, randomized, and parallel algorithms for big data analytics. IEEE Signal Processing Magazine, 31(5):32–43, Sept 2014.
[8] T.-H. Chang, M. Hong, and X. Wang. Multi-agent distributed optimization via inexact consensus ADMM. IEEE Transactions on Signal Processing, 63(2):482–497, Jan 2015.
[9] A. Defazio, F. Bach, and S. Lacoste-Julien. SAGA: A fast incremental gradient method with support for non-strongly convex composite objectives. In Proceedings of NIPS, 2014.
[10] S. Ghadimi and G. Lan. Stochastic first- and zeroth-order methods for nonconvex stochastic programming. SIAM Journal on Optimization, 23(4):2341–2368, 2013.
[11] D. Hajinezhad, T.-H. Chang, X. Wang, Q. Shi, and M. Hong. Nonnegative matrix factorization using ADMM: Algorithm and convergence analysis. In 2016 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pages 4742–4746, March 2016.
[12] D. Hajinezhad and M. Hong. Nonconvex alternating direction method of multipliers for distributed sparse principal component analysis. In Proceedings of GlobalSIP, 2015.
[13] M. Hong, Z.-Q. Luo, and M. Razaviyayn. Convergence analysis of alternating direction method of multipliers for a family of nonconvex problems. SIAM Journal on Optimization, 26(1):337–364, 2016.
[14] R. Johnson and T. Zhang. Accelerating stochastic gradient descent using predictive variance reduction. In Proceedings of NIPS, 2013.
[15] G. Lan. An optimal randomized incremental gradient method. 2015. Preprint.
[16] P.-L. Loh and M. Wainwright. High-dimensional regression with noisy and missing data: Provable guarantees with nonconvexity. The Annals of Statistics, 40(3):1637–1664, 2012.
[17] P. D. Lorenzo and G. Scutari. NEXT: In-network nonconvex optimization. 2016. Preprint.
[18] Z.-Q. Luo and P. Tseng. On the linear convergence of descent methods for convex essentially smooth minimization. SIAM Journal on Control and Optimization, 30(2):408–425, 1992.
[19] S. N. Negahban, P. Ravikumar, M. J. Wainwright, and B. Yu. A unified framework for high-dimensional analysis of M-estimators with decomposable regularizers. Statistical Science, 27(4):538–557, 2012.
[20] M. Razaviyayn, M. Hong, Z.-Q. Luo, and J. S. Pang. Parallel successive convex approximation for nonsmooth nonconvex optimization. In Proceedings of NIPS, 2014.
[21] S. J. Reddi, S. Sra, B. Poczos, and A. Smola. Fast incremental method for nonconvex optimization. 2016. Preprint, available on arXiv: arXiv:1603.06159.
[22] M. Schmidt, N. Le Roux, and F. Bach. Minimizing finite sums with the stochastic average gradient. 2013. Technical report, INRIA.
[23] S. Shalev-Shwartz and T. Zhang. Proximal stochastic dual coordinate ascent methods for regularized loss minimization. Journal of Machine Learning Research, 14:567–599, 2013.
[24] S. Sra. Scalable nonconvex inexact proximal splitting. In Advances in Neural Information Processing Systems (NIPS), 2012.
[25] P. Tseng and S. Yun. A coordinate gradient descent method for nonsmooth separable minimization. Mathematical Programming, 117:387–423, 2009.
[26] Z. Wang, H. Liu, and T. Zhang. Optimal computational and statistical rates of convergence for sparse nonconvex learning problems. Annals of Statistics, 42(6):2164–2201, 2014.
[27] S. Zlobec. On the Liu–Floudas convexification of smooth programs. Journal of Global Optimization, 32:401–407, 2005.
6,088 | 6,507 | LazySVD: Even Faster SVD Decomposition Yet Without Agonizing Pain*
Zeyuan Allen-Zhu
zeyuan@csail.mit.edu
Institute for Advanced Study
& Princeton University
Yuanzhi Li
yuanzhil@cs.princeton.edu
Princeton University
Abstract
We study $k$-SVD, the problem of obtaining the first $k$ singular vectors of a matrix $A$. Recently, a few breakthroughs have been discovered on $k$-SVD: Musco and Musco [19] proved the first gap-free convergence result using the block Krylov method, Shamir [21] discovered the first variance-reduction stochastic method, and Bhojanapalli et al. [7] provided the fastest $O(\mathrm{nnz}(A) + \mathrm{poly}(1/\epsilon))$-time algorithm using alternating minimization.
In this paper, we put forward a new and simple LazySVD framework to improve the above breakthroughs. This framework leads to a faster gap-free method outperforming [19], and the first accelerated and stochastic method outperforming [21]. In the $O(\mathrm{nnz}(A) + \mathrm{poly}(1/\epsilon))$ running-time regime, LazySVD outperforms [7] in certain parameter regimes without even using alternating minimization.
1 Introduction
The singular value decomposition (SVD) of a rank-$r$ matrix $A \in \mathbb{R}^{d\times n}$ corresponds to decomposing $A = V\Sigma U^\top$ where $V \in \mathbb{R}^{d\times r}$, $U \in \mathbb{R}^{n\times r}$ are two column orthonormal matrices, and $\Sigma = \mathrm{diag}\{\sigma_1, \dots, \sigma_r\} \in \mathbb{R}^{r\times r}$ is a non-negative diagonal matrix with $\sigma_1 \ge \sigma_2 \ge \cdots \ge \sigma_r \ge 0$. The columns of $V$ (resp. $U$) are called the left (resp. right) singular vectors of $A$ and the diagonal entries of $\Sigma$ are called the singular values of $A$. SVD is one of the most fundamental tools used in machine learning, computer vision, statistics, and operations research, and is essentially equivalent to principal component analysis (PCA) up to column averaging.
A rank-$k$ partial SVD, or $k$-SVD for short, is to find the top $k$ left singular vectors of $A$, or equivalently, the first $k$ columns of $V$. Denoting by $V_k \in \mathbb{R}^{d\times k}$ the first $k$ columns of $V$, and $U_k$ the first $k$ columns of $U$, one can define $A_k^* := V_kV_k^\top A = V_k\Sigma_kU_k^\top$ where $\Sigma_k = \mathrm{diag}\{\sigma_1, \dots, \sigma_k\}$. Under this notation, $A_k^*$ is the best rank-$k$ approximation of matrix $A$ in terms of minimizing $\|A - A_k\|$ among all rank-$k$ matrices $A_k$. Here, the norm can be any Schatten-$q$ norm for $q \in [1, \infty]$, including the spectral norm ($q = \infty$) and the Frobenius norm ($q = 2$), therefore making $k$-SVD a very powerful tool for information retrieval, data de-noising, or even data compression.
Traditional algorithms to compute SVD essentially run in time $O(nd\min\{d, n\})$, which is usually very expensive for big-data scenarios. As for $k$-SVD, defining $\mathsf{gap} := (\sigma_k - \sigma_{k+1})/\sigma_k$ to be the relative $k$-th eigengap of matrix $A$, the famous subspace power method or block Krylov method [14] solves $k$-SVD in time $O(\mathsf{gap}^{-1}\cdot k\cdot \mathrm{nnz}(A)\cdot\log(1/\epsilon))$ or $O(\mathsf{gap}^{-0.5}\cdot k\cdot\mathrm{nnz}(A)\cdot\log(1/\epsilon))$ respectively, if ignoring lower-order terms. Here, $\mathrm{nnz}(A)$ is the number of non-zero elements in matrix $A$, and the more precise running times are stated in Table 1.
Recently, there have been breakthroughs in computing $k$-SVD faster, from three distinct perspectives.
*The full version of this paper can be found at https://arxiv.org/abs/1607.03463. This paper is partially supported by a Microsoft Research Award, no. 0518584, and an NSF grant, no. CCF-1412958.
30th Conference on Neural Information Processing Systems (NIPS 2016), Barcelona, Spain.
Table 1: Performance comparison among direct methods. Define gap = (σ_k − σ_{k+1})/σ_k ∈ [0, 1]. GF = Gap Free; Acc = Accelerated; Stoc = Stochastic. Stochastic results in this table assume ‖a_i‖² ≤ 1 following (1.1). All GF results below provide (1+ε)‖·‖₂ spectral and (1+ε)‖·‖_F Frobenius guarantees. Rows marked × are outperformed.

Paper                          Running time                                          GF?   Acc?  Stoc?
subspace PM [19] (×)           Õ(k·nnz(A)/ε + k²d/ε)                                 yes   no    no
subspace PM [19] (×)           Õ(k·nnz(A)/gap + k²d/gap)                             no    no    no
block Krylov [19] (×)          Õ(k·nnz(A)/ε^{1/2} + k²d/ε + k³/ε^{3/2})              yes   yes   no
block Krylov [19] (×)          Õ(k·nnz(A)/gap^{1/2} + k²d/gap + k³/gap^{3/2})        no    yes   no
Shamir [21] (×)                Õ(knd + k⁴d/(σ_k⁴·gap²)) (local convergence only)     no    no    yes
LazySVD (Cor. 4.3 and 4.4)     Õ(k·nnz(A)/ε^{1/2} + k²d/ε^{1/2})                     yes   yes   no
LazySVD (Cor. 4.3 and 4.4)     Õ(k·nnz(A)/gap^{1/2} + k²d/gap^{1/2})                 no    yes   no
LazySVD (Cor. 4.3 and 4.4)     Õ(knd + kn^{3/4}d/(σ_k^{1/2}·ε^{1/2}))                yes   yes   yes
LazySVD (Cor. 4.3 and 4.4)     Õ(knd + kn^{3/4}d/(σ_k^{1/2}·gap^{1/2}))              no    yes   yes

The two stochastic LazySVD running times are always ≤ Õ(knd + kd/(σ_k²ε²)) and Õ(knd + kd/(σ_k²gap²)) respectively.
The first breakthrough is the work of Musco and Musco [19], proving a running time for $k$-SVD that does not depend on singular value gaps (or any other properties) of $A$. As highlighted in [19], providing gap-free results was an open question for decades and is a more reliable goal for practical purposes. Specifically, they proved that the block Krylov method converges in time $\tilde O\big(\frac{k\,\mathrm{nnz}(A)}{\epsilon^{1/2}} + \frac{k^2d}{\epsilon} + \frac{k^3}{\epsilon^{3/2}}\big)$, where $\epsilon$ is the multiplicative approximation error.²
The second breakthrough is the work of Shamir [21], providing a fast stochastic $k$-SVD algorithm. In a stochastic setting, one assumes³
$$A \text{ is given in the form } AA^\top = \tfrac{1}{n}\textstyle\sum_{i=1}^n a_ia_i^\top \text{ and each } a_i \in \mathbb{R}^d \text{ has norm at most } 1. \qquad (1.1)$$
Instead of repeatedly multiplying matrix $AA^\top$ to a vector in the (subspace) power method, Shamir proposed to use a random rank-1 copy $a_ia_i^\top$ to approximate such multiplications. When equipped with very ad-hoc variance-reduction techniques, Shamir showed that the algorithm has a better (local) performance than the power method (see Table 1). Unfortunately, Shamir's result is (1) not gap-free; (2) not accelerated (i.e., it does not match the $\mathsf{gap}^{-0.5}$ dependence of block Krylov); and (3) requires a very accurate warm start that in principle can take a very long time to compute.
The third breakthrough is in obtaining running times of the form $\tilde O(\mathrm{nnz}(A) + \mathrm{poly}(k, 1/\epsilon)\cdot(n + d))$ [7, 8], see Table 2. We call them NNZ results. To obtain NNZ results, one needs sub-sampling on the matrix, and this incurs a poor dependence on $\epsilon$. For this reason, the polynomial dependence on $1/\epsilon$ is usually considered the most important factor. In 2015, Bhojanapalli et al. [7] obtained a $1/\epsilon^2$-rate NNZ result using alternating minimization. Since $1/\epsilon^2$ also shows up in the sampling complexity, we believe the quadratic dependence on $\epsilon$ is tight among NNZ types of algorithms.
All the cited results rely on ad-hoc non-convex optimization techniques together with matrix algebra, which makes the final proofs complicated. Furthermore, Shamir's result [21] only works if a $1/\mathrm{poly}(d)$-accurate warm start is given, and the time needed to find a warm start is unclear.
In this paper, we develop a new algorithmic framework to solve k-SVD. It not only improves the
aforementioned breakthroughs, but also relies only on simple convex analysis.
²In this paper, we use $\tilde O$ notation to hide possible logarithmic factors in $1/\mathsf{gap}$, $1/\epsilon$, $n$, $d$, $k$ and potentially also in $\sigma_1/\sigma_{k+1}$.
³This normalization follows the tradition of the stochastic $k$-SVD and 1-SVD literature [12, 20, 21] in order to state results more cleanly.
Table 2: Performance comparison among $O(\mathrm{nnz}(A) + \mathrm{poly}(1/\epsilon))$ types of algorithms. Remark: we have not tried hard to improve the dependency with respect to $k$ or $(\sigma_1/\sigma_{k+1})$. See Remark 5.2.

Paper                  Running time                                       Frobenius norm   Spectral norm
[8]                    O(nnz(A)) + Õ(k²(n+d)/ε⁴ + k³/ε⁵)                  (1+ε)‖·‖_F       N/A
[7]                    O(nnz(A)) + Õ(k⁵(σ₁/σ_k)²(n+d)/ε²)                 (1+ε)‖·‖_F       N/A
LazySVD Theorem 5.1    O(nnz(A)) + Õ(k²(σ₁/σ_{k+1})²(n+d)/ε²)             (1+ε)‖·‖_F       ‖·‖₂ + ε‖·‖_F
LazySVD Theorem 5.1    O(nnz(A)) + Õ(k²(σ₁/σ_{k+1})(n+d)/ε^{2.5})         N/A              ‖·‖₂ + ε‖·‖_F
LazySVD Theorem 5.1    O(nnz(A)) + Õ(k⁴(σ₁/σ_{k+1})^{4.5}d/ε²)            N/A              (1+ε)‖·‖₂
1.1 Our Results and the Settlement of an Open Question
We propose to use an extremely simple framework that we call LazySVD to solve $k$-SVD:
LazySVD: perform 1-SVD repeatedly, $k$ times in total.
More specifically, in this framework we first compute the leading singular vector $v$ of $A$, then left-project $(I - vv^\top)A$, and repeat this procedure $k$ times. Quite surprisingly,
this seemingly "most-intuitive" approach was widely considered "not a good idea."
In textbooks and research papers, one typically states that LazySVD has a running time that inversely depends on all the intermediate singular value gaps $\sigma_1 - \sigma_2, \dots, \sigma_k - \sigma_{k+1}$ [18, 21]. This dependence makes the algorithm useless if some singular values are close, and was even thought to be necessary [18]. For this reason, textbooks describe only block methods (such as the block power method, block Krylov, and alternating minimization) which find the top $k$ singular vectors together. Musco and Musco [19] stated it as an open question to design "single-vector" methods without running time dependence on all the intermediate singular value gaps.
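For concreteness, here is the entire LazySVD framework as a NumPy sketch; an exact dense eigensolver stands in for the approximate 1-PCA subroutine, which is our simplifying assumption.

```python
import numpy as np

def lazy_svd(A, k):
    """Top-k left singular vectors of A via the LazySVD framework:
    compute a leading eigenvector of M = AA^T, deflate, repeat k times."""
    d = A.shape[0]
    V = np.zeros((d, k))
    M = A @ A.T
    for s in range(k):
        _, U = np.linalg.eigh(M)        # eigenvalues in ascending order
        v = U[:, -1]                    # leading eigenvector of M_{s-1}
        V[:, s] = v
        P = np.eye(d) - np.outer(v, v)
        M = P @ M @ P                   # M_s = (I - v v^T) M_{s-1} (I - v v^T)
    return V
```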
In this paper, we fully answer this open question with novel analyses of the LazySVD framework. In particular, the resulting running time either
• depends on $\mathsf{gap}^{-0.5}$, where $\mathsf{gap}$ is the relative singular value gap only between $\sigma_k$ and $\sigma_{k+1}$, or
• depends on $\epsilon^{-0.5}$, where $\epsilon$ is the approximation ratio (so it is gap-free).
Such dependency matches the best known dependency for block methods.
More surprisingly, by making different choices of the 1-SVD subroutine in this LazySVD framework, we obtain multiple algorithms for different needs (see Tables 1 and 2):
• If accelerated gradient descent or the Lanczos algorithm is used for 1-SVD, we obtain a faster $k$-SVD algorithm than block Krylov [19].
• If a variance-reduction stochastic method is used for 1-SVD, we obtain the first accelerated stochastic algorithm for $k$-SVD, and this outperforms Shamir [21].
• If one sub-samples $A$ before applying LazySVD, the running time becomes $\tilde O(\mathrm{nnz}(A) + \epsilon^{-2}\cdot\mathrm{poly}(k)\cdot d)$. This improves upon [7] in certain (but sufficiently interesting) parameter regimes, yet completely avoids the use of alternating minimization.
Finally, besides the running time advantages above, our analysis is based entirely on convex optimization because 1-SVD is solvable using convex techniques. LazySVD also works when $k$ is not known to the algorithm, as opposed to block methods, which need to know $k$ in advance.
Other Related Work. Some authors focus on the streaming or online model of 1-SVD [4, 15, 17] or $k$-SVD [3]. These algorithms are slower than offline methods. Unlike $k$-SVD, accelerated stochastic methods were previously known for 1-SVD [12, 13]. After this paper was published, LazySVD was generalized to also solve canonical component analysis and generalized PCA by the same authors [1]. If one is only interested in projecting a vector onto the top $k$-eigenspace, without computing the top $k$ eigenvectors as we do in this paper, this can also be done in an accelerated manner [2].
2 Preliminaries
Given a matrix $A$, we denote by $\|A\|_2$ and $\|A\|_F$ respectively the spectral and Frobenius norms of $A$. For $q \ge 1$, we denote by $\|A\|_{S_q}$ the Schatten $q$-norm of $A$. We write $A \succeq B$ if $A, B$ are symmetric and $A - B$ is positive semi-definite (PSD). We denote by $\lambda_k(M)$ the $k$-th largest eigenvalue of a symmetric matrix $M$, and by $\sigma_k(A)$ the $k$-th largest singular value of a rectangular matrix $A$.
Since $\lambda_k(AA^\top) = \lambda_k(A^\top A) = (\sigma_k(A))^2$, solving $k$-SVD for $A$ is the same as solving $k$-PCA for $M = AA^\top$.
We denote by $\sigma_1 \ge \cdots \ge \sigma_d \ge 0$ the singular values of $A \in \mathbb{R}^{d\times n}$, and by $\lambda_1 \ge \cdots \ge \lambda_d \ge 0$ the eigenvalues of $M = AA^\top \in \mathbb{R}^{d\times d}$. (Although $A$ may have fewer than $d$ singular values, for instance when $n < d$; if this happens, we append zeros.) We denote by $A_k^*$ the best rank-$k$ approximation of $A$.
We use $\perp$ to denote the orthogonal complement of a matrix. More specifically, given a column orthonormal matrix $U \in \mathbb{R}^{d\times k}$, we define $U^\perp := \{x \in \mathbb{R}^d \mid U^\top x = 0\}$. For notational simplicity, we sometimes also denote $U^\perp$ as a $d\times(d-k)$ matrix consisting of some basis of $U^\perp$.
Theorem 2.1 (approximate matrix inverse). Given a $d\times d$ matrix $M \succeq 0$ and constants $\lambda, \delta > 0$ satisfying $\lambda I - M \succeq \delta I$, one can minimize the quadratic $f(x) := x^\top(\lambda I - M)x - b^\top x$ in order to invert $(\lambda I - M)^{-1}b$. Suppose the desired accuracy is $\|x - (\lambda I - M)^{-1}b\| \le \epsilon'$. Then,
• Accelerated gradient descent (AGD) produces such an output $x$ in $O\big(\frac{\lambda^{1/2}}{\delta^{1/2}}\log\frac{\lambda}{\delta\epsilon'}\big)$ iterations, each requiring $O(d)$ time plus the time needed to multiply $M$ with a vector.
• If $M$ is given in the form $M = \frac{1}{n}\sum_{i=1}^n a_ia_i^\top$ and $\|a_i\|^2 \le 1$, then accelerated SVRG (see for instance [5]) produces such an output $x$ in time $O\big(\max\big\{nd,\ \frac{n^{3/4}d\,\lambda^{1/4}}{\delta^{1/2}}\big\}\log\frac{\lambda}{\delta\epsilon'}\big)$.
3 A Specific 1-SVD Algorithm: Shift-and-Inverse Revisited
In this section, we study a specific 1-PCA algorithm, AppxPCA (recall that 1-PCA equals 1-SVD). It is a (multiplicatively) approximate algorithm for computing the leading eigenvector of a symmetric matrix. We emphasize that, in principle, most known 1-PCA algorithms (e.g., power method, Lanczos method) are suitable for our LazySVD framework. We choose AppxPCA solely because it provides the maximum flexibility in obtaining all of the stochastic / NNZ running time results at once.
Our AppxPCA uses the shift-and-inverse routine [12, 13], and our pseudocode in Algorithm 1 is a modification of Algorithm 5 in [12]. Since we need a more refined running time statement with a multiplicative error guarantee, and since the stated proof in [12] is in any case only a sketch, we carefully reprove a similar result of [12] and state the following theorem:
Theorem 3.1 (AppxPCA). Let $M \in \mathbb{R}^{d\times d}$ be a symmetric matrix with eigenvalues $1 \ge \lambda_1 \ge \cdots \ge \lambda_d \ge 0$ and corresponding eigenvectors $u_1, \dots, u_d$. With probability at least $1 - p$, AppxPCA produces an output $w$ satisfying
$$\sum_{i\in[d],\ \lambda_i \le (1-\delta_\times)\lambda_1}(w^\top u_i)^2 \le \epsilon \qquad\text{and}\qquad w^\top M w \ge (1 - \delta_\times)(1 - \epsilon)\lambda_1.$$
Furthermore, the total number of oracle calls to $\mathcal{A}$ is $O(\log(1/\delta_\times)m_1 + m_2)$, and each time we call $\mathcal{A}$ we have $\frac{\lambda_{\min}(\lambda^{(s)}I - M)}{\lambda^{(s)}} \ge \frac{\delta_\times}{12}$ and $\lambda_{\min}(\lambda^{(s)}I - M) \ge \frac{\delta_\times\lambda_1}{12}$.
Since AppxPCA reduces 1-PCA to oracle calls of a matrix inversion subroutine $\mathcal{A}$, these stated conditions in Theorem 3.1, together with the complexity results for matrix inversion (see Theorem 2.1), imply the following running times for AppxPCA:
for matrix inversions (see Theorem 2.1), imply the following running times for AppxPCA:
Corollary 3.2.
1
e 1/2
? If A is AGD, the running time of AppxPCA is O
multiplied with O(d) plus the time needed
??
to multiply M with a vector.
Pn
? If M = n1 i=1 ai a>
i where each kai k2 ? 1, and A is accelerated SVRG, then the total running
n3/4 d
e
time of AppxPCA is O max{nd, 1/4
.
1/2
?1
??
(The following pseudocode is only for proving our theoretical results; practitioners should feel free to use their favorite 1-PCA algorithm, such as Lanczos, in place of AppxPCA.)
Algorithm 1 AppxPCA($\mathcal{A}$, $M$, $\delta_\times$, $\epsilon$, $p$)
Input: $\mathcal{A}$, an approximate matrix inversion method; $M \in \mathbb{R}^{d\times d}$, a symmetric matrix satisfying $0 \preceq M \preceq I$; $\delta_\times \in (0, 0.5]$, a multiplicative error; $\epsilon \in (0, 1)$, a numerical accuracy parameter; and $p \in (0, 1)$, a confidence parameter.   ◃ the running time depends only logarithmically on $1/\epsilon$ and $1/p$
 1: $m_1 \leftarrow \big\lceil 4\log\frac{288d}{p^2}\big\rceil$, $m_2 \leftarrow \big\lceil \log\frac{36d}{p^2\epsilon}\big\rceil$;   ◃ $m_1 = \mathcal{T}_{PM}(8, 1/32, p)$ and $m_2 = \mathcal{T}_{PM}(2, \epsilon/4, p)$ using the definition in Lemma A.1
 2: $\tilde\epsilon_1 \leftarrow \frac{\delta_\times}{6}\cdot\frac{1}{64m_1}$ and $\tilde\epsilon_2 \leftarrow \frac{\epsilon}{8m_2}$;
 3: $\hat w_0 \leftarrow$ a random unit vector; $s \leftarrow 0$; $\lambda^{(0)} \leftarrow 1 + \delta_\times$;
 4: repeat
 5:   $s \leftarrow s + 1$;
 6:   for $t = 1$ to $m_1$ do
 7:     Apply $\mathcal{A}$ to find $\hat w_t$ satisfying $\big\|\hat w_t - (\lambda^{(s-1)}I - M)^{-1}\hat w_{t-1}\big\| \le \tilde\epsilon_1$;
 8:   $w \leftarrow \hat w_{m_1}/\|\hat w_{m_1}\|$;
 9:   Apply $\mathcal{A}$ to find $v$ satisfying $\big\|v - (\lambda^{(s-1)}I - M)^{-1}w\big\| \le \tilde\epsilon_1$;
10:   $\Delta^{(s)} \leftarrow \frac{1}{2}\cdot\frac{1}{w^\top v - \tilde\epsilon_1}$ and $\lambda^{(s)} \leftarrow \lambda^{(s-1)} - \frac{\Delta^{(s)}}{2}$;
11: until $\Delta^{(s)} \le \frac{\delta_\times\lambda^{(s)}}{3}$;
12: $f \leftarrow s$;
13: for $t = 1$ to $m_2$ do
14:   Apply $\mathcal{A}$ to find $\hat w_t$ satisfying $\big\|\hat w_t - (\lambda^{(f)}I - M)^{-1}\hat w_{t-1}\big\| \le \tilde\epsilon_2$;
15: return $w := \hat w_{m_2}/\|\hat w_{m_2}\|$.
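The shift-and-invert idea behind AppxPCA can be sanity-checked with the toy sketch below, which runs a plain power method on $(\lambda I - M)^{-1}$ using exact solves in place of the oracle $\mathcal{A}$; the choice of $\lambda$ and the iteration count are illustrative. When $\lambda$ is only slightly above $\lambda_1(M)$, the inverted matrix has a constant relative eigengap, which is the source of the accelerated rate.

```python
import numpy as np

def shift_invert_power(M, lam, iters=30, seed=0):
    """Power method on (lam*I - M)^{-1}; assumes lam > lambda_max(M)."""
    rng = np.random.default_rng(seed)
    d = M.shape[0]
    H = lam * np.eye(d) - M
    w = rng.standard_normal(d)
    w /= np.linalg.norm(w)
    for _ in range(iters):
        w = np.linalg.solve(H, w)   # one (exact) oracle call
        w /= np.linalg.norm(w)
    return w
```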
Algorithm 2 LazySVD($\mathcal{A}$, $M$, $k$, $\delta_\times$, $\epsilon_{pca}$, $p$)
Input: $\mathcal{A}$, an approximate matrix inversion method; $M \in \mathbb{R}^{d\times d}$, a matrix satisfying $0 \preceq M \preceq I$; $k \in [d]$, the desired rank; $\delta_\times \in (0, 1)$, a multiplicative error; $\epsilon_{pca} \in (0, 1)$, a numerical accuracy parameter; and $p \in (0, 1)$, a confidence parameter.
1: $M_0 \leftarrow M$ and $V_0 \leftarrow [\,]$;
2: for $s = 1$ to $k$ do
3:   $v_s' \leftarrow \mathrm{AppxPCA}(\mathcal{A}, M_{s-1}, \delta_\times/2, \epsilon_{pca}, p/k)$;   ◃ to practitioners: use your favorite 1-PCA algorithm, such as Lanczos, to compute $v_s'$
4:   $v_s \leftarrow (I - V_{s-1}V_{s-1}^\top)v_s'\,\big/\,\big\|(I - V_{s-1}V_{s-1}^\top)v_s'\big\|$;   ◃ project $v_s'$ onto $V_{s-1}^\perp$
5:   $V_s \leftarrow [V_{s-1}, v_s]$;
6:   $M_s \leftarrow (I - v_sv_s^\top)M_{s-1}(I - v_sv_s^\top)$;   ◃ we also have $M_s = (I - V_sV_s^\top)M(I - V_sV_s^\top)$
7: end for
8: return $V_k$.
4 Main Algorithm and Theorems
Our algorithm LazySVD is stated in Algorithm 2. It starts with $M_0 = M$ and applies AppxPCA $k$ times. In the $s$-th iteration, it computes an approximate leading eigenvector of the matrix $M_{s-1}$ using AppxPCA with multiplicative error $\delta_\times/2$, projects $M_{s-1}$ onto the orthogonal space of this vector, and then calls the result $M_s$.
In this stated form, LazySVD finds approximately the top $k$ eigenvectors of a symmetric matrix $M \in \mathbb{R}^{d\times d}$. If $M$ is given as $M = AA^\top$, then LazySVD automatically finds the $k$-SVD of $A$.
4.1 Our Core Theorems
We state our approximation and running time core theorems of LazySVD below, and then provide
corollaries to translate them into gap-dependent and gap-free theorems on k-SVD.
Theorem 4.1 (approximation). Let $M \in \mathbb{R}^{d\times d}$ be a symmetric matrix with eigenvalues $1 \ge \lambda_1 \ge \cdots \ge \lambda_d \ge 0$ and corresponding eigenvectors $u_1, \dots, u_d$. Let $k \in [d]$, let $\delta_\times, p \in (0, 1)$, and let $\epsilon_{pca} \le \mathrm{poly}\big(\epsilon, \delta_\times, \frac{1}{d}, \frac{\lambda_{k+1}}{\lambda_1}\big)$.⁴ Then, LazySVD outputs a (column) orthonormal matrix $V_k = (v_1, \dots, v_k) \in \mathbb{R}^{d\times k}$ which, with probability at least $1 - p$, satisfies all of the following properties. (Denote by $M_k = (I - V_kV_k^\top)M(I - V_kV_k^\top)$.)
(a) Core lemma: $\|V_k^\top U\|_2 \le \epsilon$, where $U = (u_j, \dots, u_d)$ is the (column) orthonormal matrix and $j$ is the smallest index satisfying $\lambda_j \le (1 - \delta_\times)\|M_{k-1}\|_2$.
(b) Spectral norm guarantee: $\lambda_{k+1} \le \|M_k\|_2 \le \frac{\lambda_{k+1}}{1 - \delta_\times}$.
(c) Rayleigh quotient guarantee: $(1 - \delta_\times)\lambda_k \le v_k^\top M v_k \le \frac{1}{1 - \delta_\times}\lambda_k$.
(d) Schatten-$q$ norm guarantee: for every $q \ge 1$, we have $\|M_k\|_{S_q} \le \frac{(1 + \delta_\times)^2}{(1 - \delta_\times)^2}\big(\sum_{i=k+1}^d \lambda_i^q\big)^{1/q}$.
We defer the proof of Theorem 4.1 to the full version, and we also have a section in the full version
to highlight the technical ideas behind the proof. Below we state the running time of LazySVD.
Theorem 4.2 (running time). LazySVD can be implemented to run in time
• $\tilde O\big(\frac{k\,\mathrm{nnz}(M) + k^2d}{\delta_\times^{1/2}}\big)$ if $\mathcal{A}$ is AGD and $M \in \mathbb{R}^{d\times d}$ is given explicitly;
• $\tilde O\big(\frac{k\,\mathrm{nnz}(A) + k^2d}{\delta_\times^{1/2}}\big)$ if $\mathcal{A}$ is AGD and $M$ is given as $M = AA^\top$ where $A \in \mathbb{R}^{d\times n}$; or
• $\tilde O\big(knd + \frac{kn^{3/4}d}{\lambda_k^{1/4}\delta_\times^{1/2}}\big)$ if $\mathcal{A}$ is accelerated SVRG and $M = \frac{1}{n}\sum_{i=1}^n a_ia_i^\top$ where each $\|a_i\|^2 \le 1$.
Above, the $\tilde O$ notation hides logarithmic factors with respect to $k, d, 1/\delta_\times, 1/p, 1/\lambda_1$, and $\lambda_1/\lambda_k$.
Proof of Theorem 4.2. We call AppxPCA $k$ times, and each time we can feed $M_{s-1} = (I - V_{s-1}V_{s-1}^\top)M(I - V_{s-1}V_{s-1}^\top)$ implicitly into AppxPCA; thus the time needed to multiply $M_{s-1}$ with a $d$-dimensional vector is $O(dk + \mathrm{nnz}(M))$ or $O(dk + \mathrm{nnz}(A))$. Here, the $O(dk)$ overhead is due to the projection of a vector onto $V_{s-1}^\perp$. This proves the first two running times using Corollary 3.2.
To obtain the third running time, when we compute $M_s$ from $M_{s-1}$, we explicitly project $a_i' \leftarrow (I - v_sv_s^\top)a_i$ for each vector $a_i$, and feed the new $a_1', \dots, a_n'$ into AppxPCA. Now the running time follows from the second part of Corollary 3.2, together with the fact that $\|M_{s-1}\|_2 \ge \|M_{k-1}\|_2 \ge \lambda_k$.
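The implicit multiplication used in this proof is a few lines of code; the sketch below realizes a single $M_{s-1}$ matrix-vector product at the stated $O(\mathrm{nnz}(A) + dk)$ cost.

```python
import numpy as np

def deflated_matvec(A, V, x):
    """Compute M_{s-1} x with M_{s-1} = (I - VV')AA'(I - VV'),
    never forming the d-by-d matrix explicitly."""
    x = x - V @ (V.T @ x)   # project onto the orthogonal complement, O(dk)
    x = A @ (A.T @ x)       # apply AA' implicitly, O(nnz(A))
    x = x - V @ (V.T @ x)   # project again, O(dk)
    return x
```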
4.2 Our Main Results for k-SVD
Our main theorems imply the following corollaries (proved in the full version of this paper).
Corollary 4.3 (Gap-dependent $k$-SVD). Let $A \in \mathbb{R}^{d\times n}$ be a matrix with singular values $1 \ge \sigma_1 \ge \cdots \ge \sigma_d \ge 0$ and corresponding left singular vectors $u_1, \dots, u_d \in \mathbb{R}^d$. Let $\mathsf{gap} = \frac{\sigma_k - \sigma_{k+1}}{\sigma_k}$ be the relative gap. For fixed $\epsilon, p > 0$, consider the output
$$V_k \leftarrow \mathrm{LazySVD}\Big(\mathcal{A},\ AA^\top,\ k,\ \mathsf{gap},\ O\Big(\frac{\epsilon^2\,\mathsf{gap}^2}{k^4(\sigma_1/\sigma_k)^4}\Big),\ p\Big).$$
Then, defining $W = (u_{k+1}, \dots, u_d)$, we have with probability at least $1 - p$:
$V_k$ is a rank-$k$ (column) orthonormal matrix with $\|V_k^\top W\|_2 \le \epsilon$.
Our running time is $\tilde O\big(\frac{k\,\mathrm{nnz}(A) + k^2d}{\sqrt{\mathsf{gap}}}\big)$, or $\tilde O\big(knd + \frac{kn^{3/4}d}{\sqrt{\sigma_k}\,\sqrt{\mathsf{gap}}}\big)$ in the stochastic setting (1.1). Both running times depend only poly-logarithmically on $1/\epsilon$.
Corollary 4.4 (Gap-free k-SVD). Let A ∈ R^{d×n} be a matrix with singular values 1 ≥ σ_1 ≥ ··· ≥ σ_d ≥ 0. For fixed ε, p > 0, consider the output
(v_1, ..., v_k) = V_k ← LazySVD(A, AA^T, k, ε/3, O(ε⁶/(k⁴d⁴(σ_1/σ_{k+1})¹²)), p).
Then, defining A_k = V_k V_k^T A, which is a rank-k matrix, we have with probability at least 1 − p:
1. Spectral norm guarantee: ‖A − A_k‖_2 ≤ (1 + ε)‖A − A*_k‖_2;
2. Frobenius norm guarantee: ‖A − A_k‖_F ≤ (1 + ε)‖A − A*_k‖_F; and
3. Rayleigh quotient guarantee: ∀i ∈ [k], |v_i^T AA^T v_i − σ_i²| ≤ εσ_i².
The running time is Õ((k·nnz(A) + k²d)/ε^{1/2}), or Õ(knd + kn^{3/4}d/(σ_k^{1/2}ε^{1/2})) in the stochastic setting (1.1).
⁴ The detailed specifications of ε_pca can be found in the appendix where we restate the theorem more formally. To provide the simplest proof, we have not tightened the polynomial factors in the theoretical upper bound of ε_pca because the running time depends only logarithmically on 1/ε_pca.
Remark 4.5. The spectral and Frobenius guarantees are standard. The spectral guarantee is more desirable than the Frobenius one in practice [19]. In fact, our algorithm implies, for all q ≥ 1, ‖A − A_k‖_{S_q} ≤ (1 + ε)‖A − A*_k‖_{S_q}, where ‖·‖_{S_q} is the Schatten-q norm. The Rayleigh quotient guarantee was introduced by Musco and Musco [19] for a more refined comparison. They showed that the block Krylov method satisfies |v_i^T AA^T v_i − σ_i²| ≤ εσ_{k+1}², which is slightly stronger than ours. However, these two guarantees are not much different in practice, as our experiments show.
5 NNZ Running Time
In this section, we translate our results from the previous section into O(nnz(A) + poly(k, 1/ε)·(n + d)) running-time statements. The idea is surprisingly simple: we sample either random columns of A or random entries of A, and then apply LazySVD to compute the k-SVD. Such a translation directly gives either 1/ε^{2.5} results, if AGD is used as the convex subroutine and either column or entry sampling is used, or a 1/ε² result, if accelerated SVRG and column sampling are used together.
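As a concrete illustration of the column-sampling step, here is a small NumPy sketch that samples columns with probability proportional to their squared norms and rescales so that the sketch preserves AA^T in expectation; the exact sampling distribution and sample size used in our analysis may differ, so treat this only as the mechanics.

import numpy as np

def sample_columns(A, m, seed=0):
    """Return a d x m sketch B of A with E[B @ B.T] = A @ A.T."""
    rng = np.random.default_rng(seed)
    probs = (A ** 2).sum(axis=0)
    probs = probs / probs.sum()          # norm-squared sampling distribution
    idx = rng.choice(A.shape[1], size=m, replace=True, p=probs)
    return A[:, idx] / np.sqrt(m * probs[idx])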
We only informally state our theorem and defer all the details to the full paper.
Theorem 5.1 (informal). Let A ∈ R^{d×n} be a matrix with singular values σ_1 ≥ ··· ≥ σ_d ≥ 0. For every ε ∈ (0, 1/2), one can apply LazySVD with appropriately chosen δ× on a "carefully sub-sampled version" of A. Then, the resulting matrix V ∈ R^{d×k} can satisfy
• spectral norm guarantee: ‖A − VV^T A‖_2 ≤ ‖A − A*_k‖_2 + ε‖A − A*_k‖_F;⁵
• Frobenius norm guarantee: ‖A − VV^T A‖_F ≤ (1 + ε)‖A − A*_k‖_F.
The total running time depends on (1) whether column or entry sampling is used, (2) which matrix inversion routine A is used, and (3) whether the spectral or the Frobenius guarantee is needed. We list our deduced results in Table 2, and the formal statements can be found in the full version of this paper.
Remark 5.2. The main purpose of our NNZ results is to demonstrate the strength of the LazySVD framework in terms of improving the ε dependency to 1/ε². Since the 1/ε² rate matches the sampling complexity, it is very challenging to obtain an NNZ result with a 1/ε² dependency.⁶ We have not tried hard, and believe it possible, to improve the polynomial dependence with respect to k or (σ_1/σ_{k+1}).
6 Experiments
We demonstrate the practicality of our LazySVD framework, and compare it to the block power method and the block Krylov method. We emphasize that, in theory, the best worst-case complexity for 1-PCA is obtained by AppxPCA on top of accelerated SVRG. However, for the size of our chosen datasets, the Lanczos method runs faster than AppxPCA, and we therefore adopt the Lanczos method as the 1-PCA method for our LazySVD framework.⁷
Datasets. We use the datasets SNAP/amazon0302, SNAP/email-enron, and news20 that were also used by Musco and Musco [19], as well as an additional, famous dataset, RCV1. The first two can be found on the SNAP website [16] and the last two can be found on the LibSVM website [11]. The four datasets give rise to sparse matrices of dimensions 257570 × 262111, 35600 × 16507, 11269 × 53975, and 20242 × 47236, respectively.
⁵ This is the best known spectral guarantee one can obtain using NNZ running time [7]. It is an open question whether the stricter ‖A − VV^T A‖_2 ≤ (1 + ε)‖A − A*_k‖_2 type of spectral guarantee is possible.
⁶ On one hand, one can use dimension reduction such as [9] to reduce the problem size to O(k/ε²); to the best of our knowledge, it is impossible to obtain any NNZ result faster than 1/ε³ using solely dimension reduction. On the other hand, obtaining a 1/ε² dependency was the main contribution of [7]: they relied on alternating minimization, which we have avoided in our paper.
⁷ Our LazySVD framework turns every 1-PCA method satisfying Theorem 3.1 (including the Lanczos method) into a k-SVD solver. However, our theoretical results (esp. stochastic and NNZ) rely on AppxPCA because Lanczos is not a stochastic method.
Figure 1: Selected performance plots. Relative error (y-axis) vs. running time (x-axis). Panels: (a) amazon, k = 20, spectral; (b) news, k = 20, spectral; (c) news, k = 20, rayleigh; (d) email, k = 10, Fnorm; (e) rcv1, k = 30, Fnorm; (f) rcv1, k = 30, rayleigh(last). Each panel compares this paper, Krylov(unstable), Krylov, and PM.
Implemented Algorithms. For the block Krylov method, it is a well-known issue that the Lanczos type of three-term recurrence update is numerically unstable. This is why Musco and Musco [19] only used the stable variant of block Krylov, which requires an orthogonalization of each n × k matrix with respect to all previously obtained n × k matrices. This greatly improves the numerical stability, albeit sacrificing running time. We implement both of these algorithms. In sum, we have implemented:
• PM: block power method for T iterations.
• Krylov: stable block Krylov method for T iterations [19].
• Krylov(unstable): the three-term recurrence implementation of block Krylov for T iterations.
• LazySVD: k calls of the vanilla Lanczos method, where each call runs T iterations.
A Fair Running-Time Comparison. For a fixed integer T, the four methods go through the dataset (in terms of multiplying A with column vectors) the same number of times. However, since LazySVD does not need block orthogonalization (as needed in PM and Krylov) and does not need a (Tk)-dimensional SVD computation at the end (as needed in Krylov), LazySVD is clearly much faster for a fixed value of T. We therefore compare the performances of the four methods in terms of running time rather than T.
We programmed the four algorithms using the same programming language with the same sparse-matrix implementation. We tested them single-threaded on the same Intel i7-3770 3.40GHz personal computer. As for the final low-dimensional SVD decomposition step at the end of the PM or Krylov method (which is not needed for our LazySVD), we used a third-party library built upon the x64 Intel Math Kernel Library, so the time needed for this SVD is maximally reduced.
Performance Metrics. We compute four metrics on the output V = (v_1, ..., v_k) ∈ R^{n×k}:
• Fnorm: relative Frobenius norm error: (‖A − VV^T A‖_F − ‖A − A*_k‖_F)/‖A − A*_k‖_F.
• spectral: relative spectral norm error: (‖A − VV^T A‖_2 − ‖A − A*_k‖_2)/‖A − A*_k‖_2.
• rayleigh(last): Rayleigh quotient error relative to σ_{k+1}: max_{j=1,...,k} |σ_j² − v_j^T AA^T v_j| / σ_{k+1}².
• rayleigh: relative Rayleigh quotient error: max_{j=1,...,k} |σ_j² − v_j^T AA^T v_j| / σ_j².
The first three metrics were also used by Musco and Musco [19]. We added the fourth one because our theory only predicted convergence with respect to the fourth but not the third metric. However, we observe that in practice they are not much different from each other.
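For concreteness, the four metrics can be computed as follows, given the output V, the matrix A, and its true singular values sigma (sorted decreasingly). Here V is taken to hold approximate left singular vectors of A so that V.T @ A is well defined; this dense NumPy sketch is for illustration only, since the actual experiments would use sparse routines.

import numpy as np

def eval_metrics(A, V, sigma, k):
    R = A - V @ (V.T @ A)                       # residual A - V V^T A
    tail_f = np.sqrt((sigma[k:] ** 2).sum())    # ||A - A*_k||_F
    fnorm = (np.linalg.norm(R, "fro") - tail_f) / tail_f
    spectral = (np.linalg.norm(R, 2) - sigma[k]) / sigma[k]
    quad = np.array([V[:, j] @ (A @ (A.T @ V[:, j])) for j in range(k)])
    err = np.abs(sigma[:k] ** 2 - quad)         # Rayleigh quotient errors
    rayleigh_last = err.max() / sigma[k] ** 2
    rayleigh = (err / sigma[:k] ** 2).max()
    return fnorm, spectral, rayleigh_last, rayleigh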
Our Results. We study four datasets, each with k = 10, 20, 30, and with the four performance metrics, totaling 48 plots. Due to space limitations, we select six representative plots out of the 48 and include them in Figure 1. (The full plots can be found in Figures 2, 3, 4 and 5 in the appendix.) We make the following observations:
• LazySVD outperforms its three competitors almost universally.
• Krylov(unstable) outperforms Krylov for small values of T; however, it is less useful for obtaining accurate solutions due to its instability. (The dotted green curves even go up if T is large.)
• The subspace power method is unsurprisingly the slowest, due to its lack of acceleration.
8
References
[1] Zeyuan Allen-Zhu and Yuanzhi Li. Doubly Accelerated Methods for Faster CCA and Generalized Eigendecomposition. ArXiv e-prints, abs/1607.06017, July 2016.
[2] Zeyuan Allen-Zhu and Yuanzhi Li. Faster Principal Component Regression via Optimal Polynomial Approximation to sgn(x). ArXiv e-prints, abs/1608.04773, August 2016.
[3] Zeyuan Allen-Zhu and Yuanzhi Li. First Efficient Convergence for Streaming k-PCA: a Global, Gap-Free, and Near-Optimal Rate. ArXiv e-prints, abs/1607.07837, July 2016.
[4] Zeyuan Allen-Zhu and Yuanzhi Li. Follow the Compressed Leader: Faster Algorithm for Matrix Multiplicative Weight Updates. ArXiv e-prints, abs/1701.01722, January 2017.
[5] Zeyuan Allen-Zhu and Yang Yuan. Improved SVRG for Non-Strongly-Convex or Sum-of-Non-Convex Objectives. In ICML, 2016.
[6] Sanjeev Arora, Satish Rao, and Umesh V. Vazirani. Expander flows, geometric embeddings and graph partitioning. Journal of the ACM, 56(2), 2009.
[7] Srinadh Bhojanapalli, Prateek Jain, and Sujay Sanghavi. Tighter Low-rank Approximation via Sampling the Leveraged Element. In SODA, pages 902–920, 2015.
[8] Kenneth L. Clarkson and David P. Woodruff. Low rank approximation and regression in input sparsity time. In STOC, pages 81–90, 2013.
[9] Michael B. Cohen, Sam Elder, Cameron Musco, Christopher Musco, and Madalina Persu. Dimensionality reduction for k-means clustering and low rank approximation. In STOC, pages 163–172. ACM, 2015.
[10] Petros Drineas and Anastasios Zouzias. A Note on Element-wise Matrix Sparsification via a Matrix-valued Bernstein Inequality. ArXiv e-prints, abs/1006.0407, January 2011.
[11] Rong-En Fan and Chih-Jen Lin. LIBSVM Data: Classification, Regression and Multi-label. Accessed: 2015-06.
[12] Dan Garber and Elad Hazan. Fast and simple PCA via convex optimization. ArXiv e-prints, September 2015.
[13] Dan Garber, Elad Hazan, Chi Jin, Sham M. Kakade, Cameron Musco, Praneeth Netrapalli, and Aaron Sidford. Robust shift-and-invert preconditioning: Faster and more sample efficient algorithms for eigenvector computation. In ICML, 2016.
[14] Gene H. Golub and Charles F. Van Loan. Matrix Computations. The JHU Press, 4th edition, 2012.
[15] Prateek Jain, Chi Jin, Sham M. Kakade, Praneeth Netrapalli, and Aaron Sidford. Streaming PCA: Matching Matrix Bernstein and Near-Optimal Finite Sample Guarantees for Oja's Algorithm. In COLT, 2016.
[16] Jure Leskovec and Andrej Krevl. SNAP Datasets: Stanford large network dataset collection. http://snap.stanford.edu/data, June 2014.
[17] Chris J. Li, Mengdi Wang, Han Liu, and Tong Zhang. Near-Optimal Stochastic Approximation for Online Principal Component Estimation. ArXiv e-prints, abs/1603.05305, March 2016.
[18] Ren-Cang Li and Lei-Hong Zhang. Convergence of the block Lanczos method for eigenvalue clusters. Numerische Mathematik, 131(1):83–113, 2015.
[19] Cameron Musco and Christopher Musco. Randomized block Krylov methods for stronger and faster approximate singular value decomposition. In NIPS, pages 1396–1404, 2015.
[20] Ohad Shamir. A Stochastic PCA and SVD Algorithm with an Exponential Convergence Rate. In ICML, pages 144–153, 2015.
[21] Ohad Shamir. Fast stochastic algorithms for SVD and PCA: Convergence properties and convexity. In ICML, 2016.
[22] Joel A. Tropp. An Introduction to Matrix Concentration Inequalities. ArXiv e-prints, abs/1501.01571, January 2015.
6,089 | 6,508 | Statistical Inference for Cluster Trees
Jisu Kim
Department of Statistics
Carnegie Mellon University
Pittsburgh, USA
jisuk1@andrew.cmu.edu
Yen-Chi Chen
Department of Statistics
University of Washington
Seattle, USA
yenchic@uw.edu
Alessandro Rinaldo
Department of Statistics
Carnegie Mellon University
Pittsburgh, USA
arinaldo@stat.cmu.edu
Sivaraman Balakrishnan
Department of Statistics
Carnegie Mellon University
Pittsburgh, USA
siva@stat.cmu.edu
Larry Wasserman
Department of Statistics
Carnegie Mellon University
Pittsburgh, USA
larry@stat.cmu.edu
Abstract
A cluster tree provides a highly-interpretable summary of a density function by
representing the hierarchy of its high-density clusters. It is estimated using the
empirical tree, which is the cluster tree constructed from a density estimator. This
paper addresses the basic question of quantifying our uncertainty by assessing the
statistical significance of topological features of an empirical cluster tree. We first
study a variety of metrics that can be used to compare different trees, analyze their
properties and assess their suitability for inference. We then propose methods to
construct and summarize confidence sets for the unknown true cluster tree. We
introduce a partial ordering on cluster trees which we use to prune some of the
statistically insignificant features of the empirical tree, yielding interpretable and
parsimonious cluster trees. Finally, we illustrate the proposed methods on a variety
of synthetic examples and furthermore demonstrate their utility in the analysis of a
Graft-versus-Host Disease (GvHD) data set.
1 Introduction
Clustering is a central problem in the analysis and exploration of data. It is a broad topic, with several
existing distinct formulations, objectives, and methods. Despite the extensive literature on the topic,
a common aspect of these clustering methodologies that has hindered their widespread scientific adoption is the dearth of methods for statistical inference in the context of clustering. Methods for inference broadly allow us to quantify our uncertainty, to discern "true" clusters from finite-sample artifacts, and to rigorously test hypotheses related to the estimated cluster structure.
In this paper, we study statistical inference for the cluster tree of an unknown density. We assume that
we observe an i.i.d. sample {X_1, ..., X_n} from a distribution P_0 with unknown density p_0. Here, X_i ∈ X ⊂ R^d. The connected components C(λ) of the upper level set {x : p_0(x) ≥ λ} are called high-density clusters. The set of high-density clusters forms a nested hierarchy which is referred to as the cluster tree¹ of p_0, which we denote as T_{p_0}.
Methods for density clustering fall broadly in the space of hierarchical clustering algorithms, and
inherit several of their advantages: they allow for extremely general cluster shapes and sizes, and
in general do not require the pre-specification of the number of clusters. Furthermore, unlike flat
¹ It is also referred to as the density tree or the level-set tree.
30th Conference on Neural Information Processing Systems (NIPS 2016), Barcelona, Spain.
clustering methods, hierarchical methods are able to provide a multi-resolution summary of the
underlying density. The cluster tree, irrespective of the dimensionality of the input random variable, is
displayed as a two-dimensional object and this makes it an ideal tool to visualize data. In the context
of statistical inference, density clustering has another important advantage over other clustering
methods: the object of inference, the cluster tree of the unknown density p0 , is clearly specified.
In practice, the cluster tree is estimated from a finite sample, {X_1, ..., X_n} ∼ p_0. In a scientific application, we are often most interested in reliably distinguishing topological features genuinely present in the cluster tree of the unknown p_0 from topological features that arise due to random fluctuations in the finite sample {X_1, ..., X_n}. In this paper, we focus our inference on the cluster tree of the kernel density estimator, T_{p̂_h}, where p̂_h is the kernel density estimator,
tree of the kernel density estimator, Tpbh , where pbh is the kernel density estimator,
n
1 X
kx ? Xi k
pbh (x) =
K
,
(1)
nhd i=1
h
where K is a kernel and h is an appropriately chosen bandwidth 2 .
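As a concrete reference, here is a minimal NumPy sketch of the estimator in Equation (1) with a Gaussian kernel; the choice of K is ours, for illustration only.

import numpy as np

def kde(x, X, h):
    """Evaluate the KDE of Equation (1) at points x (m x d) from a sample X (n x d)."""
    n, d = X.shape
    diff = x[:, None, :] - X[None, :, :]      # pairwise differences, (m, n, d)
    dist2 = (diff ** 2).sum(axis=-1)          # squared distances ||x - X_i||^2
    K = np.exp(-dist2 / (2 * h ** 2))         # Gaussian kernel values
    const = (2 * np.pi) ** (d / 2)            # Gaussian normalization
    return K.sum(axis=1) / (n * h ** d * const)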
To develop methods for statistical inference on cluster trees, we construct a confidence set for T_{p_0}, i.e. a collection of trees that will include T_{p_0} with some (pre-specified) probability. A confidence set can be converted to a hypothesis test; moreover, a confidence set conveys both statistical and scientific significance, while a hypothesis test can convey only statistical significance [23, p.155].
To construct and understand the confidence set, we need to solve a few technical and conceptual issues. The first issue is that we need a metric on trees, in order to quantify the collection of trees that are in some sense "close enough" to T_{p̂_h} to be statistically indistinguishable from it. We use the bootstrap to construct tight data-driven confidence sets. However, only some metrics are sufficiently "regular" to be amenable to bootstrap inference, which guides our choice of a suitable metric on trees.
On the basis of a finite sample, the true density is indistinguishable from a density with additional infinitesimal perturbations. This leads to the second technical issue, which is that our confidence set invariably contains infinitely complex trees. Inspired by the idea of one-sided inference [9], we propose a partial ordering on the set of all density trees to define simple trees. To find simple representative trees in the confidence set, we prune the empirical cluster tree by removing statistically insignificant features. These pruned trees are valid with statistical guarantees and are simpler than the empirical cluster tree in the proposed partial ordering.
Our contributions: We begin by considering a variety of metrics on trees, studying their properties and discussing their suitability for inference. We then propose a method for constructing confidence sets and for visualizing the trees in these sets. This distinguishes the aspects of the estimated tree that correspond to real features (those present in the cluster tree T_{p_0}) from noise features. Finally, we apply our methods to several simulations, and to a Graft-versus-Host Disease (GvHD) data set, to demonstrate the usefulness of our techniques and the role of statistical inference in clustering problems.
Related work: There is a vast literature on density trees (see for instance the book by Klemelä [16]), and we focus our review on works most closely aligned with our paper. The formal definition of
and we focus our review on works most closely aligned with our paper. The formal definition of
the cluster tree, and notions of consistency in estimation of the cluster tree date back to the work of
Hartigan [15]. Hartigan studied the efficacy of single-linkage in estimating the cluster tree and showed
that single-linkage is inconsistent when the input dimension d > 1. Several fixes to single-linkage
have since been proposed (see for instance [21]). The paper of Chaudhuri and Dasgupta [4] provided
the first rigorous minimax analysis of the density clustering and provided a computationally tractable,
consistent estimator of the cluster tree. The papers [1, 5, 12, 17] propose various modifications and
analyses of estimators for the cluster tree. While the question of estimation has been extensively
addressed, to our knowledge our paper is the first concerning inference for the cluster tree.
There is a literature on inference for phylogenetic trees (see the papers [13, 10]), but the object of
inference and the hypothesized generative models are typically quite different. Finally, in our paper,
we also consider various metrics on trees. There are several recent works, in the computational
topology literature, that have considered different metrics on trees. The most relevant to our own
work, are the papers [2, 18] that propose the functional distortion metric and the interleaving distance
on trees. These metrics, however, are NP-hard to compute in general. In Section 3, we consider a
variety of computationally tractable metrics and assess their suitability for inference.
² We address computing the tree T_{p̂_h}, and the choice of bandwidth, in more detail in what follows.
Figure 1: Examples of density trees (each panel plots a density p(x) against x). Black curves are the original density functions and the red trees are the associated density trees.
2 Background and Definitions
We work with densities defined on a subset X ⊂ R^d, and denote by ‖·‖ the Euclidean norm on X. Throughout this paper we restrict our attention to cluster tree estimators that are specified in terms of a function f : X → [0, ∞), i.e. we have the following definition:
Definition 1. For any f : X → [0, ∞), the cluster tree of f is a function T_f : R → 2^X, where 2^X is the set of all subsets of X, and T_f(λ) is the set of the connected components of the upper-level set {x ∈ X : f(x) ≥ λ}. We define the collection of connected components {T_f} as {T_f} = ∪_λ T_f(λ).
As will be clearer in what follows, working only with cluster trees defined via a function f simplifies our search for metrics on trees, allowing us to use metrics specified in terms of the function f. With a slight abuse of notation, we will use T_f to denote also {T_f}, and write C ∈ T_f to signify C ∈ {T_f}. The cluster tree T_f indeed has a tree structure, since for every pair C_1, C_2 ∈ T_f, either C_1 ⊂ C_2, C_2 ⊂ C_1, or C_1 ∩ C_2 = ∅ holds. See Figure 1 for a graphical illustration of a cluster tree. The formal definition of the tree requires some topological theory; these details are in Appendix B.
In the context of hierarchical clustering, we are often interested in the "height" at which two points or two clusters merge in the clustering. We introduce the merge height from [12, Definition 6]:
Definition 2. For any two points x, y ∈ X, any f : X → [0, ∞), and its tree T_f, their merge height m_f(x, y) is defined as the largest λ such that x and y are in the same density cluster at level λ, i.e.
m_f(x, y) = sup{λ ∈ R : there exists C ∈ T_f(λ) such that x, y ∈ C}.
We refer to the function m_f : X × X → R as the merge height function. For any two clusters C_1, C_2 ∈ {T_f}, their merge height m_f(C_1, C_2) is defined analogously:
m_f(C_1, C_2) = sup{λ ∈ R : there exists C ∈ T_f(λ) such that C_1, C_2 ⊆ C}.
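To make Definition 2 concrete: in one dimension, the merge height of two points is simply the minimum of f on the interval between them, since that is the highest level at which both points still belong to a common connected upper-level set. A small illustrative sketch, with a grid discretization of our own choosing:

import numpy as np

def merge_height_1d(f_grid, i, j):
    """Merge height of grid points i and j for a density sampled on a 1-D grid."""
    lo, hi = min(i, j), max(i, j)
    return f_grid[lo:hi + 1].min()

# Example: a bimodal density on [0, 1].
t = np.linspace(0, 1, 201)
f = np.exp(-((t - 0.3) / 0.1) ** 2) + np.exp(-((t - 0.7) / 0.1) ** 2)
print(merge_height_1d(f, 60, 140))  # the two modes merge at the central dip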
One of the contributions of this paper is to construct valid confidence sets for the unknown true tree and to develop methods for visualizing the trees contained in this confidence set. Formally, we assume that we have samples {X_1, ..., X_n} from a distribution P_0 with density p_0.
Definition 3. An asymptotic (1 − α) confidence set, C_α, is a collection of trees with the property that
P_0(T_{p_0} ∈ C_α) = 1 − α + o(1).
We also provide non-asymptotic upper bounds on the o(1) term in the above definition. Additionally, we provide methods to summarize the confidence set above. In order to summarize the confidence set, we define a partial ordering on trees.
Definition 4. For any f, g : X → [0, ∞) and their trees T_f, T_g, we say T_f ⪯ T_g if there exists a map Φ : {T_f} → {T_g} such that for any C_1, C_2 ∈ T_f, we have C_1 ⊆ C_2 if and only if Φ(C_1) ⊆ Φ(C_2).
With Definitions 3 and 4, we describe the confidence set succinctly via some of the simplest trees in the confidence set in Section 4. Intuitively, these are trees without statistically insignificant splits.
It is easy to check that the partial order ⪯ in Definition 4 is reflexive (i.e. T_f ⪯ T_f) and transitive (i.e. T_{f_1} ⪯ T_{f_2} and T_{f_2} ⪯ T_{f_3} implies T_{f_1} ⪯ T_{f_3}). However, to argue that ⪯ is a partial order, we need to show antisymmetry, i.e. that T_f ⪯ T_g and T_g ⪯ T_f implies that T_f and T_g are equivalent in some sense. In Appendices A and B, we show an important result: for an appropriate topology on trees, T_f ⪯ T_g and T_g ⪯ T_f implies that T_f and T_g are topologically equivalent.
Figure 2: Three illustrations of the partial order ⪯ in Definition 4 (each panel plots a tree T_p or T_q against x). In each case, in agreement with our intuitive notion of simplicity, the tree on the top ((a), (b), and (c)) is lower than the corresponding tree on the bottom ((d), (e), and (f)) in the partial order, i.e. for each example T_p ⪯ T_q.
The partial order ⪯ in Definition 4 matches intuitive notions of the complexity of a tree for several reasons (see Figure 2). Firstly, T_f ⪯ T_g implies (number of edges of T_f) ≤ (number of edges of T_g) (compare Figure 2(a) and (d), and see Lemma 6 in Appendix B). Secondly, if T_g is obtained from T_f by adding edges, then T_f ⪯ T_g (compare Figure 2(b) and (e), and see Lemma 7 in Appendix B). Finally, the existence of a topology-preserving embedding from {T_f} to {T_g} implies the relationship T_f ⪯ T_g (compare Figure 2(c) and (f), and see Lemma 8 in Appendix B).
3 Tree Metrics
In this section, we introduce some natural metrics on cluster trees and study some of their properties that determine their suitability for statistical inference. We let p, q : X → [0, ∞) be nonnegative functions and let T_p and T_q be the corresponding trees.
3.1 Metrics
We consider three metrics on cluster trees; the first is the standard ℓ_∞ metric, while the second and third are metrics that appear in the work of Eldridge et al. [12].
ℓ_∞ metric: The simplest metric is d_∞(T_p, T_q) = ‖p − q‖_∞ = sup_{x∈X} |p(x) − q(x)|. We will show in what follows that, in the context of statistical inference, this metric has several advantages over other metrics.
Merge distortion metric: The merge distortion metric intuitively measures the discrepancy between the merge height functions (Definition 2) of two trees. We consider the merge distortion metric [12, Definition 11] defined by
d_M(T_p, T_q) = sup_{x,y∈X} |m_p(x, y) − m_q(x, y)|.
The merge distortion metric we consider is a special case of the metric introduced by Eldridge et al. [12].³ The merge distortion metric was introduced by Eldridge et al. [12] to study the convergence of cluster tree estimators. They establish several interesting properties of the merge distortion metric: in particular, the metric is stable to perturbations in ℓ_∞, and further, convergence in the merge distortion metric strengthens previous notions of convergence of cluster trees.
Modified merge distortion metric: We also consider the modified merge distortion metric given by
d_MM(T_p, T_q) = sup_{x,y∈X} |d_{T_p}(x, y) − d_{T_q}(x, y)|,
where d_{T_p}(x, y) = p(x) + p(y) − 2m_p(x, y), which corresponds to the (pseudo)-distance between x and y along the tree. The metric d_MM is used in various proofs in the work of Eldridge et al. [12].
³ They further allow flexibility in taking the sup over a subset of X.
It is sensitive both to distortions of the merge heights in Definition 2 and to distortions of the underlying densities. Since the metric captures the distortion of distances between points along the tree, it is in some sense most closely aligned with the cluster tree. Finally, it is worth noting that, unlike the interleaving distance and the functional distortion metric [2, 18], the three metrics we consider in this paper are quite simple to approximate to high precision.
3.2 Properties of the Metrics
The following lemma gives some basic relationships between the three metrics d_∞, d_M and d_MM. We define p_inf = inf_{x∈X} p(x), define q_inf analogously, and set a = inf_{x∈X} {p(x) + q(x)} − 2 min{p_inf, q_inf}. Note that when the Lebesgue measure μ(X) is infinite, then p_inf = q_inf = a = 0.
Lemma 1. For any densities p and q, the following relationships hold: (i) when p and q are continuous, then d_∞(T_p, T_q) = d_M(T_p, T_q); (ii) d_MM(T_p, T_q) ≤ 4 d_∞(T_p, T_q); (iii) d_MM(T_p, T_q) ≥ d_∞(T_p, T_q) − a, where a is defined as above. Additionally, when μ(X) = ∞, then d_MM(T_p, T_q) ≥ d_∞(T_p, T_q).
The proof is in Appendix F. From Lemma 1, we can see that under a mild assumption (continuity of the densities), d_∞ and d_M are equivalent. We note again that the work of Eldridge et al. [12] actually defines a family of merge distortion metrics, while we restrict our attention to a canonical one. We can also see from Lemma 1 that while the modified merge metric is not equivalent to d_∞, it is usually multiplicatively sandwiched by d_∞.
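The inequalities in Lemma 1 are also easy to check numerically. The sketch below evaluates the three metrics for two densities on a one-dimensional grid, using the fact that in one dimension m_p(x, y) is just the running minimum of p between x and y; it is illustrative only and not part of our methodology.

import numpy as np

t = np.linspace(0, 1, 200)
dt = t[1] - t[0]
p = np.exp(-((t - 0.4) / 0.15) ** 2)
q = np.exp(-((t - 0.6) / 0.15) ** 2)
p, q = p / (p.sum() * dt), q / (q.sum() * dt)  # normalize to densities

def pairwise_merge(f):
    """All pairwise 1-D merge heights m_f(t_i, t_j) on the grid."""
    n = len(f)
    m = np.empty((n, n))
    for i in range(n):
        run = f[i]
        for j in range(i, n):
            run = min(run, f[j])   # running minimum = merge height in 1-D
            m[i, j] = m[j, i] = run
    return m

mp, mq = pairwise_merge(p), pairwise_merge(q)
d_inf = np.abs(p - q).max()
d_M = np.abs(mp - mq).max()
dp = p[:, None] + p[None, :] - 2 * mp   # tree pseudo-distance for p
dq = q[:, None] + q[None, :] - 2 * mq
d_MM = np.abs(dp - dq).max()
print(d_inf, d_M, d_MM)  # expect d_inf == d_M and d_MM <= 4 * d_inf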
Our next line of investigation is aimed at assessing the suitability of the three metrics for the task of statistical inference. Given the strong equivalence of d_∞ and d_M, we focus our attention on d_∞ and d_MM. Based on prior work (see [7, 8]), the large-sample behavior of d_∞ is well understood. In particular, d_∞(T_{p̂_h}, T_{p_0}) converges to the supremum of an appropriate Gaussian process, on the basis of which we can construct confidence intervals for the d_∞ metric.
The situation for the metric d_MM is substantially more subtle. One of our eventual goals is to use the non-parametric bootstrap to construct valid estimates of the confidence set. In general, a way to assess the amenability of a functional to the bootstrap is via Hadamard differentiability [24]. Roughly speaking, Hadamard differentiability is a type of statistical stability that ensures that the functional under consideration is stable to perturbations in the input distribution. In Appendix C, we formally define Hadamard differentiability and prove that d_MM is not pointwise Hadamard differentiable. This does not completely rule out the possibility of finding a way to construct confidence sets based on d_MM, but doing so would be difficult, and so far we know of no way to do it.
In summary, we eliminate the interleaving distance and the functional distortion metric [2, 18] based on computational considerations, we eliminate the d_MM metric based on its unsuitability for statistical inference, and we focus the rest of our paper on the d_∞ (or equivalently d_M) metric, which is both computationally tractable and has well-understood statistical behavior.
4 Confidence Sets
In this section, we consider the construction of valid confidence intervals centered around the kernel density estimator defined in Equation (1). We first observe that a fixed bandwidth for the KDE gives a dimension-free rate of convergence for estimating a cluster tree. For estimating a density in high dimensions, the KDE has a poor rate of convergence, due to the decreasing bandwidth needed to simultaneously optimize the bias and the variance of the KDE.
When estimating a cluster tree, however, the bias of the KDE does not affect its cluster tree. Intuitively, the cluster tree is a shape characteristic of a function, which is not affected by the bias. Defining the biased density p_h(x) = E[p̂_h(x)], the cluster trees of p_h and of the true density p_0 are equivalent with respect to the topology in Appendix A, provided h is small enough and p_0 is regular enough:
Lemma 2. Suppose that the true unknown density p_0 has no non-degenerate critical points.⁴ Then there exists a constant h_0 > 0 such that for all 0 < h ≤ h_0, the two cluster trees T_{p_0} and T_{p_h} have the same topology in Appendix A.
⁴ The Hessian of p_0 at every critical point is non-degenerate. Such functions are known as Morse functions.
From Lemma 2, proved in Appendix G, a fixed bandwidth for the KDE can be applied to give a dimension-free rate of convergence for estimating the cluster tree. Instead of decreasing the bandwidth h and inferring the cluster tree of the true density T_{p_0} at rate O_P(n^{−2/(4+d)}), Lemma 2 implies that we can fix h > 0 and infer the cluster tree of the biased density T_{p_h} at rate O_P(n^{−1/2}), independently of the dimension. Hence a fixed bandwidth crucially enhances the convergence rate of the proposed methods in high-dimensional settings.
4.1 A data-driven confidence set
We recall that we base our inference on the d_∞ metric, and we recall the definition of a valid confidence set (see Definition 3). As a conceptual first step, suppose that for a specified value α we could compute the 1 − α quantile of the distribution of d_∞(T_{p̂_h}, T_{p_h}), and denote this value t_α. Then a valid confidence set for the unknown T_{p_h} is C_α = {T : d_∞(T, T_{p̂_h}) ≤ t_α}. To estimate t_α, we use the bootstrap. Specifically, we generate B bootstrap samples, {X̃_1^1, ..., X̃_n^1}, ..., {X̃_1^B, ..., X̃_n^B}, by sampling with replacement from the original sample. On each bootstrap sample, we compute the KDE and the associated cluster tree. We denote these cluster trees {T_{p̃_h^1}, ..., T_{p̃_h^B}}. Finally, we estimate t_α by
t̂_α = F̂^{−1}(1 − α), where F̂(s) = (1/B) Σ_{i=1}^B I(d_∞(T_{p̃_h^i}, T_{p̂_h}) < s).
Then the data-driven confidence set is Ĉ_α = {T : d_∞(T, T_{p̂_h}) ≤ t̂_α}. Using techniques from [8, 7], the following can be shown (proof omitted):
the following can be shown (proof omitted):
Theorem 3. Under mild regularity conditions on the kernel5 , we have that the constructed confidence
set is asymptotically valid and satisfies,
7 1/6
b? = 1 ? ? + O log n
P Th ? C
.
nhd
Hence our data-driven confidence set is consistent at dimension independent rate. When h is a fixed
small constant, Lemma 2 implies that Tp0 and Tph have the same topology, and Theorem 3 guarantees
that the non-parametric bootstrap is consistent at a dimension independent O(((log n)7 /n)1/6 ) rate.
For reasons explained in [8], this rate is believed to be optimal.
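A hedged sketch of the resulting procedure follows. Since d_∞ between the trees of two KDEs is just the sup-norm distance between the estimates (Lemma 1), estimating t̂_α only requires sup-norm distances, here approximated on a grid. The kde function is the one sketched after Equation (1); the number of replicates B and the evaluation grid are tuning choices of ours.

import numpy as np

def bootstrap_t_alpha(X, h, grid, alpha=0.05, B=200, seed=0):
    """Bootstrap estimate of the (1 - alpha) quantile t_alpha."""
    rng = np.random.default_rng(seed)
    n = X.shape[0]
    base = kde(grid, X, h)                   # KDE on the original sample
    dists = np.empty(B)
    for b in range(B):
        Xb = X[rng.integers(0, n, size=n)]   # resample with replacement
        dists[b] = np.abs(kde(grid, Xb, h) - base).max()
    return np.quantile(dists, 1 - alpha)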
4.2 Probing the Confidence Set
The confidence set Ĉ_α is an infinite set with a complex structure. Infinitesimal perturbations of the density estimate are in our confidence set, and so this set contains very complex trees. One way to understand the structure of the confidence set is to focus attention on the simple trees in the confidence set. Intuitively, these trees only contain topological features (splits and branches) that are sufficiently strongly supported by the data.
We propose two pruning schemes to find trees that are simpler than the empirical tree T_{p̂_h} and that are in the confidence set. Pruning the empirical tree aids visualization and also de-noises the empirical tree by eliminating some features that arise solely due to the stochastic variability of the finite sample. The algorithms are as follows (see Figure 3; a code sketch of the first scheme appears after the list):
1. Pruning only leaves: Remove all leaves of length less than 2t̂_α (Figure 3(b)).
2. Pruning leaves and internal branches: In this case, we first prune the leaves as above. This yields a new tree. Now we again prune (using cumulative length) any leaf of length less than 2t̂_α. We continue iteratively until all remaining leaves have cumulative length larger than 2t̂_α (Figure 3(c)).
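Here is a minimal sketch of pruning scheme 1, on a cluster tree stored as nested dictionaries; each node records the level at which its branch appears ("top"), the level at which it merges with a sibling ("bottom"), and its children. This data structure is our own simplification, not the paper's implementation, and we omit the collapse of degree-two nodes that a full implementation would perform.

def prune_leaves(node, t_alpha):
    """Recursively remove leaves whose vertical length is below 2 * t_alpha."""
    kept = []
    for child in node["children"]:
        child = prune_leaves(child, t_alpha)
        is_leaf = not child["children"]
        if is_leaf and child["top"] - child["bottom"] < 2 * t_alpha:
            continue  # drop this statistically insignificant leaf
        kept.append(child)
    return {"top": node["top"], "bottom": node["bottom"], "children": kept}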
In Appendix D.2 we formally define the pruning operation and show the following. The tree T̃ remaining after either of the above pruning operations satisfies: (i) T̃ ⪯ T_{p̂_h}; (ii) there exists a function f whose tree is T̃; and (iii) T̃ ∈ Ĉ_α (see Lemma 10 in Appendix D.2). In other words, we have identified a valid tree with a statistical guarantee that is simpler than the original estimate T_{p̂_h}. Intuitively, some of the statistically insignificant features have been removed from T_{p̂_h}. We should point out, however,
⁵ See Appendix D.1 for details.
Figure 3: Illustrations of our two pruning strategies. Panels: (a) the empirical tree, with leaves L1-L6 and edges E1-E5; (b) pruning only leaves; (c) pruning leaves and branches. In (b), leaves that are insignificant are pruned, while in (c), insignificant internal branches are further pruned top-down.
Figure 4: Simulation examples (all with α = 0.05). Panels: (a) and (d) the ring data; (b) and (e) the Mickey Mouse data; (c) and (f) the yingyang data. The solid lines are the pruned trees; the dashed lines are leaves (and edges) removed by the pruning procedure. A bar of length 2t̂_α is at the top right corner. The pruned trees recover the actual structure of connected components.
that there may exist other trees that are simpler than T_{p̂_h} that are in Ĉ_α. Ideally, we would like to have an algorithm that identifies all trees in the confidence set that are minimal with respect to the partial order ⪯ in Definition 4. This is an open question that we will address in future work.
5 Experiments
In this section, we demonstrate the techniques we have developed for inference on synthetic data, as
well as on a real dataset.
5.1 Simulated data
We consider three simulations: the ring data (Figure 4(a) and (d)), the Mickey Mouse data (Figure 4(b) and (e)), and the yingyang data (Figure 4(c) and (f)). The smoothing bandwidth is chosen by the Silverman reference rule [20], and we pick the significance level α = 0.05.
Figure 5: The GvHD data. Panels: (a) the positive treatment data; (b) the control data. The solid brown lines are the remaining branches after pruning; the blue dashed lines are the pruned leaves (or edges). A bar of length 2t̂_α is at the top right corner.
Example 1: The ring data. (Figure 4(a) and (d)) The ring data consists of two structures: an outer
ring and a center node. The outer circle consists of 1000 points and the central node contains 200
points. To construct the tree, we used h = 0.202.
Example 2: The Mickey Mouse data. (Figure 4(b) and (e)) The Mickey Mouse data has three
components: the top left and right uniform circle (400 points each) and the center circle (1200 points).
In this case, we select h = 0.200.
Example 3: The yingyang data. (Figure 4(c) and (f)) This data has 5 connected components: outer
ring (2000 points), the two moon-shape regions (400 points each), and the two nodes (200 points
each). We choose h = 0.385.
Figure 4 shows those data ((a), (b), and (c)) along with the pruned density trees (solid parts in (d), (e),
and (f)). Before pruning the tree (both solid and dashed parts), there are more leaves than the actual
number of connected components. But after pruning (only the solid parts), every leaf corresponds to
an actual connected component. This demonstrates the power of a good pruning procedure.
5.2 GvHD dataset
Now we apply our method to the GvHD (Graft-versus-Host Disease) dataset [3]. GvHD is a complication that may occur when transplanting bone marrow or stem cells from one subject to another [3]. We obtained the GvHD dataset from the R package 'mclust'. There are two subsamples: the control sample and the positive (treatment) sample. The control sample consists of 9083 observations and the positive sample contains 6809 observations on 4 biomarker measurements (d = 4). By the normal reference rule [20], we pick h = 39.1 for the positive sample and h = 42.2 for the control sample. We set the significance level α = 0.05.
Figure 5 shows the density trees in both samples. The solid brown parts are the remaining components
of density trees after pruning and the dashed blue parts are the branches removed by pruning. As can
be seen, the pruned density tree of the positive sample (Figure 5(a)) is quite different from the pruned
tree of the control sample (Figure 5(b)). The density function of the positive sample has fewer bumps
(2 significant leaves) than the control sample (3 significant leaves). By comparing the pruned trees,
we can see how the two distributions differ from each other.
6 Discussion
There are several open questions that we will address in future work. First, it would be useful to have an algorithm that can find all trees in the confidence set that are minimal with respect to the partial order ⪯. These are the simplest trees consistent with the data. Second, we would like to find a way to derive valid confidence sets using the metric d_MM, which we view as an appealing metric for tree inference. Finally, we have used the Silverman reference rule [20] for choosing the bandwidth, but we would like to find a bandwidth selection method that is more targeted to tree inference.
References
[1] S. Balakrishnan, S. Narayanan, A. Rinaldo, A. Singh, and L. Wasserman. Cluster trees on manifolds. In Advances in Neural Information Processing Systems, 2012.
[2] U. Bauer, E. Munch, and Y. Wang. Strong equivalence of the interleaving and functional distortion metrics for Reeb graphs. In 31st International Symposium on Computational Geometry (SoCG 2015), volume 34, pages 461–475. Schloss Dagstuhl–Leibniz-Zentrum fuer Informatik, 2015.
[3] R. R. Brinkman, M. Gasparetto, S.-J. J. Lee, A. J. Ribickas, J. Perkins, W. Janssen, R. Smiley, and C. Smith. High-content flow cytometry and temporal data analysis for defining a cellular signature of graft-versus-host disease. Biology of Blood and Marrow Transplantation, 13(6):691–700, 2007.
[4] K. Chaudhuri and S. Dasgupta. Rates of convergence for the cluster tree. In Advances in Neural Information Processing Systems, pages 343–351, 2010.
[5] K. Chaudhuri, S. Dasgupta, S. Kpotufe, and U. von Luxburg. Consistent procedures for cluster tree estimation and pruning. IEEE Transactions on Information Theory, 2014.
[6] F. Chazal, B. T. Fasy, F. Lecci, B. Michel, A. Rinaldo, and L. Wasserman. Robust topological inference: Distance to a measure and kernel distance. arXiv preprint arXiv:1412.7197, 2014.
[7] Y.-C. Chen, C. R. Genovese, and L. Wasserman. Density level sets: Asymptotics, inference, and visualization. arXiv:1504.05438, 2015.
[8] V. Chernozhukov, D. Chetverikov, and K. Kato. Central limit theorems and bootstrap in high dimensions. Annals of Probability, 2016.
[9] D. Donoho. One-sided inference about functionals of a density. The Annals of Statistics, 16(4):1390–1420, 1988.
[10] B. Efron, E. Halloran, and S. Holmes. Bootstrap confidence levels for phylogenetic trees. Proceedings of the National Academy of Sciences, 93(23), 1996.
[11] U. Einmahl and D. M. Mason. Uniform in bandwidth consistency of kernel-type function estimators. The Annals of Statistics, 33(3):1380–1403, 2005.
[12] J. Eldridge, M. Belkin, and Y. Wang. Beyond Hartigan consistency: Merge distortion metric for hierarchical clustering. In Proceedings of The 28th Conference on Learning Theory, pages 588–606, 2015.
[13] J. Felsenstein. Confidence limits on phylogenies, a justification. Evolution, 39, 1985.
[14] C. R. Genovese, M. Perone-Pacifico, I. Verdinelli, and L. Wasserman. Nonparametric ridge estimation. The Annals of Statistics, 42(4):1511–1545, 2014.
[15] J. A. Hartigan. Consistency of single linkage for high-density clusters. Journal of the American Statistical Association, 1981.
[16] J. Klemelä. Smoothing of multivariate data: density estimation and visualization, volume 737. John Wiley & Sons, 2009.
[17] S. Kpotufe and U. V. Luxburg. Pruning nearest neighbor cluster trees. In Proceedings of the 28th International Conference on Machine Learning (ICML-11), pages 225–232, 2011.
[18] D. Morozov, K. Beketayev, and G. Weber. Interleaving distance between merge trees. Discrete and Computational Geometry, 49:22–45, 2013.
[19] D. W. Scott. Multivariate density estimation: theory, practice, and visualization. John Wiley & Sons, 2015.
[20] B. W. Silverman. Density estimation for statistics and data analysis, volume 26. CRC Press, 1986.
[21] W. Stuetzle and R. Nugent. A generalized single linkage method for estimating the cluster tree of a density. Journal of Computational and Graphical Statistics, 19(2), 2010.
[22] L. Wasserman. All of nonparametric statistics. Springer Science & Business Media, 2006.
[23] L. Wasserman. All of Statistics: A Concise Course in Statistical Inference. Springer Science & Business Media, 2010. ISBN 1441923225, 9781441923226.
[24] J. Wellner. Weak Convergence and Empirical Processes: With Applications to Statistics. Springer Science & Business Media, 2013.
6,090 | 6,509 | Deep Learning for Predicting
Human Strategic Behavior
Jason Hartford, James R. Wright, Kevin Leyton-Brown
Department of Computer Science
University of British Columbia
{jasonhar, jrwright, kevinlb}@cs.ubc.ca
Abstract
Predicting the behavior of human participants in strategic settings is an important
problem in many domains. Most existing work either assumes that participants
are perfectly rational, or attempts to directly model each participant's cognitive
processes based on insights from cognitive psychology and experimental economics.
In this work, we present an alternative, a deep learning approach that automatically
performs cognitive modeling without relying on such expert knowledge. We
introduce a novel architecture that allows a single network to generalize across
different input and output dimensions by using matrix units rather than scalar units,
and show that its performance significantly outperforms that of the previous state
of the art, which relies on expert-constructed features.
1
Introduction
Game theory provides a powerful framework for the design and analysis of multiagent systems
that involve strategic interactions [see, e.g., 16]. Prominent examples of such systems include
search engines, which use advertising auctions to generate a significant portion of their revenues
and rely on game theoretic reasoning to analyze and optimize these mechanisms [6, 20]; spectrum
auctions, which rely on game theoretic analysis to carefully design the "rules of the game" in order to
coordinate the reallocation of valuable radio spectrum [13]; and security systems, which analyze the
allocation of security personnel as a game between rational adversaries in order to optimize their use
of scarce resources [19]. In such applications, system designers optimize their choices with respect
to assumptions about the preferences, beliefs and capabilities of human players [14]. A standard
game theoretic approach is to assume that players are perfectly rational expected utility maximizers
and indeed, that they have common knowledge of this. In some applications, such as the high-stakes
spectrum auctions just mentioned, this assumption is probably reasonable, as participants are typically
large companies that hire consultants to optimize their decision making. In other scenarios that
allow less time for planning or involve less sophisticated participants, however, the perfect rationality
assumption may lead to suboptimal system designs. For example, Yang et al. [24] were able to
improve the performance of systems that defend against adversaries in security games by relaxing the
perfect rationality assumption. Of course, relaxing this assumption means finding something else to
replace it with: an accurate model of boundedly rational human behavior.
The behavioral game theory literature has developed a wide range of models for predicting human behavior in strategic settings by incorporating cognitive biases and limitations derived from
observations of play and insights from cognitive psychology [2]. Like much previous work, we
study the unrepeated, simultaneous-move setting, for two reasons. First, the setting is conceptually
straightforward: games can be represented in a so-called "normal form", simply by listing the utilities to each player for each combination of their actions (e.g., see Figure 1). Second, the setting is
surprisingly general: auctions, security systems, and many other interactions can be modeled naturally
30th Conference on Neural Information Processing Systems (NIPS 2016), Barcelona, Spain.
[Figure 1: a 3 x 3 payoff matrix (left) and a bar chart of counts of actions, ticks 0-30 (right)]
          R        C        L
  T    10,10     3,5     18,8
  M     5,3    20,20     0,25
  B     8,18    25,0    15,15
Figure 1: An example 3 x 3 normal form game. The row player chooses from actions {T, M, B} and the column player chooses from actions {R, C, L}. If the row player played action T and the column player played action C, their resulting payoffs would be 3 and 5 respectively. Given such a matrix as input, we aim to predict a distribution over the row player's choice of actions, defined by the observed frequency of actions shown on the right.
as normal form games. The most successful predictive models for this setting combine notions of
iterative reasoning and noisy best response [21] and use hand-crafted features to model the behavior
of non-strategic players [23].
The recent success of deep learning has demonstrated that predictive accuracy can often be enhanced,
and expert feature engineering dispensed with, by fitting highly flexible models that are capable of
learning novel representations. A key feature in successful deep models is the use of careful design
choices to encode "basic domain knowledge of the input, in particular its topological structure ... to learn better features" [1, emphasis original]. For example, feed-forward neural nets can, in principle,
represent the same functions as convolution networks, but the latter tend to be more effective in
vision applications because they encode the prior that low-level features should be derived from the
pixels within a small neighborhood and that predictions should be invariant to small input translations.
Analogously, Clark and Storkey [4] encoded the fact that a Go board is invariant to rotations. These
modeling choices constrain more general architectures to a subset of the solution space that is likely
to contain good solutions. Our work seeks to do the same for the behavioral game theory setting,
identifying novel prior assumptions that extend deep learning to predicting behavior in strategic
scenarios encoded as two player, normal-form games.
A key property required of such a model is invariance to game size: a model must be able to take
as input an m x n bimatrix game (i.e., two m x n matrices encoding the payoffs of players 1 and 2 respectively) and output an m-dimensional probability distribution over player 1's actions, for
arbitrary values of n and m, including values that did not appear in training data. In contrast, existing
deep models typically assume either a fixed-dimensional input or an arbitrary-length sequence of
fixed-dimensional inputs, in both cases with a fixed-dimensional output. We also have the prior belief
that permuting rows and columns in the input (i.e., changing the order in which actions are presented
to the players) does not change the output beyond a corresponding permutation. In Section 3, we
present an architecture that operates on matrices using scalar weights to capture invariance to changes
in the size of the input matrices and to permutations of its rows and columns. In Section 4 we evaluate
our model's ability to predict distributions of play given normal form descriptions of games on a
dataset of experimental data from a variety of experiments, and find that our feature-free deep learning
model significantly exceeds the performance of the current state-of-the-art model, which has access
to hand-tuned features based on expert knowledge [23].
2
Related Work
Prediction in normal form games. The task of predicting actions in normal form games has been
studied mostly in the behavioral game theory literature. Such models tend to have few parameters and
to aim to describe previously identified cognitive processes. Two key ideas are the relaxation of best
response to "quantal response" and the notion of "limited iterative strategic reasoning". Models that
assume quantal response assume that players select actions with probability increasing in expected
utility instead of always selecting the action with the largest expected utility [12]. This is expressed
formally by assuming that players select actions, $a_i$, with probability $s_i$ given by the logistic quantal response function
$$s_i(a_i) = \frac{\exp(\lambda \, u_i(a_i, s_{-i}))}{\sum_{a_i'} \exp(\lambda \, u_i(a_i', s_{-i}))}.$$
This function is equivalent to the familiar softmax function with an additional scalar sharpness parameter $\lambda$ that allows the function to output the best response as $\lambda \to \infty$ and the uniform distribution as $\lambda \to 0$. This relaxation is motivated by the behavioral notion that if two actions have similar expected utility then they will also have similar probability of being chosen. Iterative strategic reasoning means that players perform a bounded number of steps of reasoning in deciding on their actions, rather than always converging to fixed
points as in classical game theory. Models incorporating this idea typically assume that every agent
has an integer level. Non-strategic, "level-0" players choose actions uniformly at random; level-k players best respond to the level-(k-1) players [5] or to a mixture of levels between level-0 and level-(k-1) [3]. The two ideas can be combined, allowing players to quantally respond to lower
level players [18, 22]. Because iterative reasoning models are defined recursively starting from a
base-case of level-0 behavior, their performance can be improved by better modeling the non-strategic
level-0 players. Wright and Leyton-Brown [23] combine quantal response and bounded steps of
reasoning with a model of non-strategic behavior based on hand-crafted game theoretic features. To
the best of our knowledge, this is the current state-of-the-art model.
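To make these two ingredients concrete, the following is a minimal NumPy sketch, written here for illustration rather than taken from any of the cited papers, of logistic quantal response and a simple quantal level-k recursion; the sharpness value, the uniform level-0 model, and the simultaneous update of both players' beliefs are all illustrative choices.

import numpy as np

def quantal_response(expected_utility, sharpness):
    # Softmax with sharpness lambda: approaches best response as
    # sharpness -> infinity and the uniform distribution as sharpness -> 0.
    z = sharpness * expected_utility
    z = z - z.max()                      # numerical stability
    p = np.exp(z)
    return p / p.sum()

def quantal_level_k(U_row, U_col, k, sharpness=1.0):
    # Level-0 plays uniformly; each iteration quantally best-responds
    # to the previous level's play (a simple variant of the models above).
    m, n = U_row.shape
    row_belief = np.full(m, 1.0 / m)     # distribution over row actions
    col_belief = np.full(n, 1.0 / n)     # distribution over column actions
    for _ in range(k):
        new_row = quantal_response(U_row @ col_belief, sharpness)
        new_col = quantal_response(U_col.T @ row_belief, sharpness)
        row_belief, col_belief = new_row, new_col
    return row_belief                    # predicted row-player distribution

# Example: the 3 x 3 game of Figure 1.
U_row = np.array([[10., 3., 18.], [5., 20., 0.], [8., 25., 15.]])
U_col = np.array([[10., 5., 8.], [3., 20., 25.], [18., 0., 15.]])
print(quantal_level_k(U_row, U_col, k=2, sharpness=0.5))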
Deep learning. Deep learning has demonstrated much recent success in solving supervised learning
problems in vision, speech and natural language processing [see, e.g., 9, 15]. By contrast, there have
been relatively few applications of deep learning to multiagent settings. Notable exceptions are Clark
and Storkey [4] and the policy network used in Silver et al. [17]'s work in predicting the actions
of human players in Go. Their approach is similar in spirit to ours: they map from a description
of the Go board at every move to the choices made by human players, while we perform the same
mapping from a normal form game. The setting differs in that Go is a single, sequential, zero-sum
game with a far larger, but fixed, action space, which requires an architecture tailored for pattern
recognition on the Go board. In contrast, we focus on constructing an architecture that generalizes
across general-sum, normal form games.
We enforce invariance to the size of the network's input. Fully convolutional networks [11] achieve invariance to the image size in a similar manner by replacing all fully connected layers with convolutions. In its architectural design, our model is mathematically similar to Lin et al. [10]'s Network in
Network model, though we derived our architecture independently using game theoretic invariances.
We discuss the relationships between the two models at the end of Section 3.
3
Modeling Human Strategic Behavior with Deep Networks
A natural starting point in applying deep networks to a new domain is testing the performance of a
regular feed-forward neural network. To apply such a model to a normal form game, we need to flatten
the utility values into a single vector of length mn + nm and learn a function that maps to the m-simplex output via multiple hidden layers. Feed-forward networks cannot handle size-invariant inputs,
but we can temporarily set that problem aside by restricting ourselves to games with a fixed input
size. We experimented with that approach and found that feed-forward networks often generalized
poorly as the network overfitted the training data (see Section 2 of the supplementary material for
experimental evidence). One way of combating overfitting is to encourage invariance through data
augmentation: for example, one may augment a dataset of images by rotating, shifting and scaling
the images slightly. In games, a natural simplifying assumption is that players are indifferent to the
order in which actions are presented, implying invariance to permutations of the payoff matrix.1
Incorporating this assumption by randomly permuting rows or columns of the payoff matrix at every
epoch of training dramatically improved the generalization performance of a feed-forward network in
our experiments, but the network is still limited to games of the size that it was trained on.
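For illustration, the permutation-based augmentation described above can be written in a few lines; this is a sketch of the idea, not the training code used in the experiments.

import numpy as np

def permute_game(U_row, U_col, y):
    # Randomly permute both players' action orderings.
    # U_row, U_col: m x n payoff matrices; y: length-m target distribution
    # over the row player's actions, which must be permuted with the rows.
    m, n = U_row.shape
    rp, cp = np.random.permutation(m), np.random.permutation(n)
    return U_row[rp][:, cp], U_col[rp][:, cp], y[rp]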
Our approach is to enforce this invariance in the model architecture rather than through data augmentation. We then add further flexibility using novel "pooling units" and by incorporating iterative response ideas inspired by behavioral game theory models. The result is a model that is flexible enough to represent all the models surveyed in Wright and Leyton-Brown [22, 23] (and a huge space of novel models as well) and which can be identified automatically. The model is also invariant to the size of the input payoff matrix, differentiable end to end and trainable using standard
gradient-based optimization.
The model has two parts: feature layers and action response layers; see Figure 2 for a graphical
overview. The feature layers take the row and column player's normalized utility matrices $U^{(r)}$ and $U^{(c)} \in \mathbb{R}^{m \times n}$ as input, where the row player has m actions and the column player has n actions. The feature layers consist of multiple levels of hidden matrix units, $H^{(r)}_{i,j} \in \mathbb{R}^{m \times n}$, each of which calculates a weighted sum of the units below and applies a non-linear activation function.
1 We thus ignore salience effects that could arise from action ordering; we plan to explore this in future work.
[Figure 2 schematic: the input utility matrices $U^{(r)}$ and $U^{(c)}$ feed stacked hidden matrix units $H_{l,i}$ with pooling for both players; softmax units produce features $f_i$; action response layers $ar_0, \dots, ar_K$ combine into the output y]
Figure 2: A schematic representation of our architecture. The feature layers consist of hidden matrix units (orange), each of which use pooling units to output row- and column-preserving aggregates (blue and purple) before being reduced to distributions over actions in the softmax units (red). Iterative response is modeled using the action response layers (green) and the final output, y, is a weighted sum of the row player's action response layers.
Each layer of hidden units is followed by pooling units, which output aggregated versions of the hidden matrices to be used by the following layer. After multiple layers, the matrices are aggregated to vectors and normalized to a distribution over actions, $f_i^{(r)} \in \Delta^m$, in softmax units. We refer to these
distributions as features because they encode higher-level representations of the input matrices that
may be combined to construct the output distribution.
As discussed earlier, iterative strategic reasoning is an important phenomenon in human decision
making; we thus want to allow our models the option of incorporating such reasoning. To do so, we
compute features for the column player in the same manner by applying the feature layers to the
transpose of the input matrices, which outputs $f_i^{(c)} \in \Delta^n$. Each action response layer for a given player then takes the opposite player's preceding action response layers as input and uses them to construct distributions over the respective players' outputs. The final output $y \in \Delta^m$ is a weighted sum of all action response layers' outputs.
Invariance-Preserving Hidden Units We build a model that ties parameters in our network by
encoding the assumption that players reason about each action identically. This assumption implies
that the row player applies the same function to each row of a given game's utility matrices. Thus, in a normal form game represented by the utility matrices $U^{(r)}$ and $U^{(c)}$, the weights associated with each row of $U^{(r)}$ and $U^{(c)}$ must be the same. Similarly, the corresponding assumption about the column player implies that the weights associated with each column of $U^{(r)}$ and $U^{(c)}$ must also be the same. We can satisfy both assumptions by applying a single scalar weight to each of the utility matrices, computing $w_r U^{(r)} + w_c U^{(c)}$. This idea can be generalized as in a standard feed-forward network to allow us to fit more complex functions. A hidden matrix unit taking all the preceding hidden matrix units as input can be calculated as
$$H_{l,i} = \phi\Big(\sum_j w_{l,i,j} H_{l-1,j} + b_{l,i}\Big), \qquad H_{l,i} \in \mathbb{R}^{m \times n},$$
where $H_{l,i}$ is the $i$th hidden unit matrix for layer $l$, $w_{l,i,j}$ is the $j$th scalar weight, $b_{l,i}$ is a scalar bias variable, and $\phi$ is a non-linear activation function applied element-wise. Notice that, as in a traditional feed-forward neural network, the output of each hidden unit is simply a nonlinear transformation of the weighted sum of the preceding layer's hidden units. Our architecture differs by maintaining a
matrix at each hidden unit instead of a scalar. So while in a traditional feed-forward network each hidden unit maps the previous layer's vector of outputs into a scalar output, in our architecture each hidden unit maps a tensor of outputs from the previous layer into a matrix output.
[Figure 3: two-panel schematic contrasting hidden matrix units without pooling (left) and with pooling (right); layers labeled Input Units, Hidden Layer 1, Hidden Layer 2]
Figure 3: Left: Without pooling units, each element of every hidden matrix unit depends only on the corresponding elements in the units from the layer below; e.g., the middle element highlighted in red depends only on the value of the elements of the matrices highlighted in orange. Right: With pooling units at each layer in the network, each element of every hidden matrix unit depends both on the corresponding elements in the units below and the pooled quantity from each row and column. E.g., the light blue and purple blocks represent the row- and column-wise aggregates corresponding to their adjacent matrices. The dark blue and purple blocks show which of these values the red element depends on. Thus, the red element depends on both the dark- and light-shaded orange cells.
Tying weights in this way reduces the number of parameters in our network by a factor of nm, offering two benefits. First, it reduces the degree to which the network is able to overfit; second and more importantly, it makes the model invariant to the size of the input matrices. To see this, notice that each hidden unit maps from a tensor containing the k output matrices of the preceding layer in $\mathbb{R}^{k \times m \times n}$ to a matrix in $\mathbb{R}^{m \times n}$ using k weights. Thus our number of parameters in each layer depends on the number of hidden units in the preceding layer, but not on the sizes of the input and output matrices. This allows the model to generalize to input sizes that do not appear in training data.
Pooling units A limitation of the weight tying used in our hidden matrix units is that it forces independence between the elements of their matrices, preventing the network from learning functions that compare the values of related elements (see Figure 3 (left)). Recall that each element of the matrices in our model corresponds to an outcome in a normal form game. A natural game theoretic notion of the "related elements" which we'd like our model to be able to compare is the set of payoffs associated with the players' actions that led to that outcome. This corresponds to the row and column of each matrix associated with the particular element.
This observation motivates our pooling units, which allow information sharing by outputting aggregated versions of their input matrix that may be used by later layers in the network to learn to compare the values of a cell in a matrix and its row- or column-wise aggregates.
$$H \to \{H_c, H_r\} = \left( \begin{bmatrix} \max_i h_{i,1} & \max_i h_{i,2} & \cdots \\ \max_i h_{i,1} & \max_i h_{i,2} & \cdots \\ \vdots & \vdots & \ddots \end{bmatrix},\; \begin{bmatrix} \max_j h_{1,j} & \max_j h_{1,j} & \cdots \\ \max_j h_{2,j} & \max_j h_{2,j} & \cdots \\ \vdots & \vdots & \ddots \end{bmatrix} \right) \qquad (1)$$
A pooling unit takes a matrix as input and outputs two matrices constructed from row- and column-preserving pooling operations respectively. A pooling operation could be any continuous function that maps from $\mathbb{R}^n \to \mathbb{R}$. We use the max function because it is necessary to represent known behavioral functions (see Section 4 of the supplementary material for details) and it offered the best empirical performance of the functions we tested. Equation (1) shows an example of a pooling layer with max functions for some arbitrary matrix H. The first of the two outputs, $H_c$, is column-preserving in that it selects the maximum value in each column of H and then stacks the resulting n-dimensional vector m times such that the dimensionality of H and $H_c$ are the same. Similarly, the row-preserving output constructs a vector of the max elements in each row and stacks the resulting m-dimensional vector n times such that $H_r$ and H have the same dimensionality. We stack the vectors that result from the pooling operation in this fashion so that the hidden units from the next layer in the network may take H, $H_c$ and $H_r$ as input. This allows these hidden units to learn functions where each element of their output is a function both of the corresponding element from the matrices below as well as their row- and column-preserving maximums (see Figure 3 (right)).
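To make the matrix units and Equation (1) concrete, here is a small NumPy sketch under our reading of the text; the function names and the tanh activation are illustrative choices, not the authors' implementation.

import numpy as np

def hidden_matrix_layer(H_prev, W, b, phi=np.tanh):
    # H_prev: list of m x n matrices from the previous layer.
    # W: (num_units, len(H_prev)) scalar weights; b: (num_units,) scalar biases.
    # Each unit applies an elementwise nonlinearity to a scalar-weighted sum
    # of the previous layer's matrices, so the layer works for any m and n.
    return [phi(sum(w * H for w, H in zip(W[i], H_prev)) + b[i])
            for i in range(W.shape[0])]

def pool(H):
    # Equation (1): H_c repeats the column maxima in every row; H_r repeats
    # the row maxima in every column; both have the same shape as H.
    H_c = np.tile(H.max(axis=0, keepdims=True), (H.shape[0], 1))
    H_r = np.tile(H.max(axis=1, keepdims=True), (1, H.shape[1]))
    return H_c, H_r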
Softmax output Our model predicts a distribution over the row player's actions. In order to do this, we need to map from the hidden matrices in the final layer, $H_{L,i} \in \mathbb{R}^{m \times n}$, of the network onto a point on the m-simplex, $\Delta^m$. We achieve this mapping by applying a row-preserving sum to each of the final layer hidden matrices $H_{L,i}$ (i.e., we sum uniformly over the columns of the matrix as described above) and then applying a softmax function to convert each of the resulting vectors $h^{(i)}$ into normalized distributions. This produces k features $f_i$, each of which is a distribution over the row player's m actions:
$$f_i = \mathrm{softmax}\big(h^{(i)}\big) \quad \text{where} \quad h^{(i)}_j = \sum_{k=1}^{n} h^{(i)}_{j,k} \;\; \text{for all } j \in \{1, ..., m\}, \; h^{(i)}_{j,k} \in H^{(i)}, \; i \in \{1, ..., k\}.$$
We can then produce the output of our features, $ar_0$, using a weighted sum of the individual features, $ar_0 = \sum_{i=1}^{k} w_i f_i$, where we optimize $w_i$ under simplex constraints, $w_i \geq 0$, $\sum_i w_i = 1$. Because each $f_i$ is a distribution and our weights $w_i$ are points on the simplex, the output of the feature layers is a mixture of distributions.
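A sketch of the softmax feature units just described, with illustrative variable names; it assumes the final hidden layer is given as a list of k matrices and that the mixture weights already lie on the simplex.

import numpy as np

def softmax(v):
    z = v - v.max()
    e = np.exp(z)
    return e / e.sum()

def feature_outputs(H_final, w):
    # H_final: list of k final-layer m x n matrices; w: k simplex weights.
    # Row-preserving sums give k vectors h_i, softmax turns each into a
    # distribution f_i over the m row actions, and ar_0 is their mixture.
    f = [softmax(H.sum(axis=1)) for H in H_final]   # sum over columns
    ar0 = sum(w_i * f_i for w_i, f_i in zip(w, f))
    return f, ar0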
Action Response Layers The feature layers described above are sufficient to meet our objective of mapping from the input payoff matrices to a distribution over the row player's actions. However, this architecture is not capable of explicitly representing iterative strategic reasoning, which the behavioral game theory literature has identified as an important modeling ingredient. We incorporate this ingredient using action response layers: the first player can respond to the second's beliefs, the second can respond to this response by the first player, and so on to some finite depth. The proportion of players in the population who iterate at each depth is a parameter of the model; thus, our architecture is also able to learn not to perform iterative reasoning.
More formally, we begin by denoting the output of the feature layers as $ar_0^{(r)} = \sum_{i=1}^{k} w_{0i}^{(r)} f_i^{(r)}$, where we now include an index (r) to refer to the output of the row player's action response layer $ar_0^{(r)} \in \Delta^m$. Similarly, by applying the feature layers to a transposed version of the input matrices, the model also outputs a corresponding $ar_0^{(c)} \in \Delta^n$ for the column player which expresses the row player's beliefs about which actions the column player will choose. Each action response layer composes its output by calculating the expected value of an internal representation of utility with respect to its belief distribution over the opposition's actions. For this internal representation of utility we chose a weighted sum of the final layer of the hidden layers, $\sum_i w_i H_{L,i}$, because each $H_{L,i}$ is already some non-linear transformation of the original payoff matrix, and so this allows the model to express utility as a transformation of the original payoffs. Given the matrix that results from this sum, we can compute expected utility with respect to the vector of beliefs about the opposition's choice of actions, $ar_j^{(c)}$, by simply taking the dot product of the weighted sum and beliefs. When we iterate this process of responding to beliefs about one's opposition more than once, higher-level players will respond to beliefs, $ar_i$, for all i less than their level and then output a weighted combination of these responses using some weights, $v_{l,i}$. Putting this together, the lth action response layer for the row player (r) is defined as
$$ar_l^{(r)} = \mathrm{softmax}\left( \lambda_l \sum_{j=0}^{l-1} v_{l,j}^{(r)} \left( \Big( \sum_{i=1}^{k} w_{l,i}^{(r)} H_{L,i}^{(r)} \Big) \cdot ar_j^{(c)} \right) \right), \quad ar_l^{(r)} \in \Delta^m, \; l \in \{1, ..., K\},$$
where l indexes the action response layer, $\lambda_l$ is a scalar sharpness parameter that allows us to sharpen the resulting distribution, $w_{l,i}^{(r)}$ and $v_{l,j}^{(r)}$ are scalar weights, $H_{L,i}^{(r)}$ are the row player's k hidden units from the final hidden layer L, $ar_j^{(c)}$ is the output of the column player's jth action response layer, and K is the total number of action response layers. We constrain $w_{l,i}^{(r)}$ and $v_{l,j}^{(r)}$ to the simplex and use $\lambda_l$ to sharpen the output distribution so that we can optimize the sharpness of the distribution and the relative weighting of its terms independently. We build up the column player's action response layer, $ar_l^{(c)}$, similarly, using the column player's internal utility representation, $H_{L,i}^{(c)}$, responding to the row player's action response layers, $ar_l^{(r)}$. These layers are not used in the final output directly but are relied upon by subsequent action response layers of the row player.
relied upon by subsequent action response layers of the row player.
Output Our model?s final output is a weighted sum of the outputs of the action response layers.
This output needs to be a valid distribution over actions. Because each of the action response layers
6
also outputs a distribution over actions, we can achieve this requirement by constraining these weights
to the simplex, thereby ensuring that the output is just a mixture of distributions. The model?s output
PK
(r)
(r)
is thus y = j=1 wj arj , where y and arj ? ?m , and wj ? ?K .
Relation to existing deep models Our model?s functional form has interesting connections with
existing deep model architectures. We discuss two of these here. First, our invariance-preserving
hidden layers can be encoded as MLP Convolution Layers described in Lin et al. [10] with the twochannel 1 ? 1 input xi,j corresponding to the two players? respective payoffs when actions i and j are
played (using patches larger than 1 ? 1 would imply the assumption that local structure is important,
which is inappropriate in our domain; thus, we do not need multiple mlpconv layers). Second, our
pooling units are superficially similar to the pooling units used in convolutional networks. However,
ours differ both in functional form and purpose: we use pooling as a way of sharing information
between cells in the matrices that are processed through our network by taking maximums across
entire rows or columns, while in computer vision, max-pooling units are used to produce invariance
to small translations of the input image by taking maximums in a small local neighborhood.
Representational generality of our architecture Our work aims to extend existing models in
behavioral game theory via deep learning, not to propose an orthogonal approach. Thus, we must
demonstrate that our representation is rich enough to capture models and features that have proven
important in that literature. We omit the details here for space reasons (see the supplementary
material, Section 4), but summarize our findings. Overall, our architecture can express the quantal
cognitive hierarchy [23] and quantal level-k [18] models and, as their sharpness tends to infinity, their
best-response equivalents cognitive hierarchy [3] and level-k [5]. Using feature layers we can also
encode all the behavioral features used in Wright and Leyton-Brown [23]. However, our architecture
is not universal; notably, it is unable to express certain features that are likely to be useful, such as
identification of dominated strategies. We plan to explore this in future work.
4
Experiments
Experimental Setup We used a dataset combining observations from 9 human-subject experimental studies conducted by behavioral economists in which subjects were paid to select actions
in normal-form games. Their payment depended on the subject's actions and the actions of their unseen opposition who chose an action simultaneously (see Section 1 of the supplementary material for further details on the experiments and data). We are interested in the model's ability to predict the distribution over the row player's action, rather than just its accuracy in predicting the most likely action. As a result, we fit models to maximize the likelihood of training data $P(D \mid \theta)$ (where $\theta$ are the
parameters of the model and D is our dataset) and evaluate them in terms of negative log-likelihood
on the test set.
All the models presented in the experimental section were optimized using Adam [8] with an initial
learning rate of 0.0002, $\beta_1 = 0.9$, $\beta_2 = 0.999$ and $\epsilon = 10^{-8}$. The models were all regularized using Dropout with drop probability 0.2 and L1 regularization with parameter 0.01. They were all trained until there was no training set improvement up to a maximum of 25,000 epochs, and the parameters from the iteration with the best training set performance were returned. Our architecture
imposes simplex constraints on the mixture weight parameters. Fortunately, simplex constraints fall
within the class of simple constraints that can be efficiently optimized using the projected gradient
algorithm [7]. The algorithm modifies standard SGD by projecting the relevant parameters onto the
constraint set after each gradient update.
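The projection step can be implemented with the standard Euclidean projection onto the probability simplex; the following is a common O(d log d) routine (a sketch; the paper does not specify which projection routine was used).

import numpy as np

def project_to_simplex(v):
    # Euclidean projection of v onto {x : x >= 0, sum(x) = 1}.
    u = np.sort(v)[::-1]                       # sort descending
    css = np.cumsum(u) - 1.0
    rho = np.nonzero(u - css / np.arange(1, len(v) + 1) > 0)[0][-1]
    theta = css[rho] / (rho + 1.0)
    return np.maximum(v - theta, 0.0)

# After each gradient step on simplex-constrained weights w:
# w = project_to_simplex(w - lr * grad_w)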
Experimental Results Figure 4 (left) shows a performance comparison between a model built
using our deep learning architecture with only a single action response layer (i.e. no iterative
reasoning; details below) and the previous state of the art, quantal cognitive hierarchy (QCH) with
hand-crafted features (shown as a blue line); for reference we also include the best feature-free model,
QCH with a uniform model of level-0 behavior (shown as a pink line). We refer to an instantiation
of our model with L hidden layers and K action response layers as an N + K layer network. All
instantiations of our model with 3 or more layers significantly improved on both alternatives and thus represent a new state of the art. Notably, the magnitude of the improvement was considerably larger
than that of adding hand-crafted features to the original QCH model.
[Figure 4: three panels of negative log-likelihood, training and test loss. Left: model variations by hidden units per layer (50; 20,20; 50,50; 100,100; 50,50,50; 100,100,100). Center: pooling comparison (50,50 and 100,100,100, each with and without pooling). Right: number of action response layers (1 to 4).]
Figure 4: Negative Log Likelihood Performance. The error bars represent 95% confidence intervals across 10 rounds of 10-fold cross-validation. We compare various models built using our architecture to QCH Uniform (pink line) and QCH Linear4 (blue line).
Figure 4 (left) considers the effect of varying the number of hidden units and layers on performance
using a single action response layer. Perhaps unsurprisingly, we found that a two layer network with
only a single hidden layer of 50 units performed poorly on both training and test data. Adding a
second hidden layer resulted in test set performance that improved on the previous state of the art.
For these three layer networks (denoted (20, 20), (50, 50) and (100, 100)), performance improved
with more units per layer, but there were diminishing returns to increasing the number of units per
layer beyond 50. The four-layer networks (denoted (50, 50, 50) and (100, 100, 100)) offered further
improvements in training set performance but test set performance diminished as the networks were
able to overfit the data. To test the effect of pooling units on performance, in Figure 4 (center)
we first removed the pooling units from two of the network configurations, keeping the rest of the
hyper-parameters unchanged. The models that did not use pooling layers under fit on the training
data and performed very poorly on the test set. While we were able to improve their performance
by turning off dropout, these unregularized networks did not match the training set performance of
the corresponding network configurations that had pooling units (see Section 3 of the supplementary
material). Thus, our final network contained two layers of 50 hidden units and pooling units.
Our next set of experiments committed to this configuration for feature layers and investigated
configurations of action-response layers, varying their number between one and four (i.e., from no
iterative reasoning up to three levels of iterative reasoning; see Figure 4 (right) ). The networks with
more than one action-response layer showed signs of overfitting: performance on the training set
improved steadily as we added AR layers but test set performance suffered. Thus, our final network
used only one action-response layer. We nevertheless remain committed to an architecture that can
capture iterative strategic reasoning; we intend to investigate more effective methods of regularizing
the parameters of action-response layers in future work.
5
Discussion and Conclusions
To design systems that efficiently interact with human players, we need an accurate model of
boundedly rational behavior. We present an architecture for learning such models that significantly
improves upon state-of-the-art performance without needing hand-tuned features developed by
domain experts. Interestingly, while the full architecture can include action response layers to
explicitly incorporate the iterative reasoning process modeled by level-k-style models, our best
performing model did not need them to set a new performance benchmark. This indicates
that the model is performing the mapping from payoffs to distributions over actions in a manner that
is substantially different from previous successful models. Some natural future directions, besides
those already discussed above, are to extend our architecture beyond two-player, unrepeated games to
games with more than two players, as well as to richer interaction environments, such as games in
which the same players interact repeatedly and games of imperfect information.
References
[1] Y. Bengio, A. Courville, and P. Vincent. Representation learning: A review and new perspectives.
IEEE Transactions on Pattern Analysis and Machine Intelligence, 35(8), 2013.
[2] C.F. Camerer. Behavioral game theory: Experiments in strategic interaction. Princeton
University Press, 2003.
[3] C.F. Camerer, T.H. Ho, and J.K. Chong. A cognitive hierarchy model of games. Quarterly
Journal of Economics, 119(3), 2004.
[4] C. Clark and A. J. Storkey. Training deep convolutional neural networks to play go. In
Proceedings of the 32nd International Conference on Machine Learning, ICML 2015, 2015.
[5] M. Costa-Gomes, V.P. Crawford, and B. Broseta. Cognition and behavior in normal-form games:
An experimental study. Econometrica, 69(5), 2001.
[6] B. Edelman, M. Ostrovsky, and M. Schwarz. Internet advertising and the generalized second-price auction: Selling billions of dollars worth of keywords. The American Economic Review,
97(1), 2007.
[7] A. Goldstein. Convex programming in Hilbert space. Bulletin of the American Mathematical
Society, 70(5), 09 1964.
[8] D. P. Kingma and J. Ba. Adam: A method for stochastic optimization. In The International
Conference on Learning Representations (ICLR), 2015.
[9] Y. LeCun, Y. Bengio, and G. Hinton. Deep learning. Nature, 2015.
[10] M. Lin, Q. Chen, and S. Yan. Network in network. In International Conference on Learning
Representations, volume abs/1312.4400. 2014.
[11] J. Long, E. Shelhamer, and T. Darrell. Fully convolutional networks for semantic segmentation.
In CVPR, June 2015.
[12] R.D. McKelvey and T.R. Palfrey. Quantal response equilibria for normal form games. GEB, 10
(1), 1995.
[13] P. Milgrom and I. Segal. Deferred-acceptance auctions and radio spectrum reallocation. In
Proceedings of the Fifteenth ACM Conference on Economics and Computation. ACM, 2014.
[14] D. C. Parkes and M. P. Wellman. Economic reasoning and artificial intelligence. Science, 349
(6245), 2015.
[15] J. Schmidhuber. Deep learning in neural networks: An overview. Neural Networks, 2015.
[16] Y. Shoham and K. Leyton-Brown. Multiagent Systems: Algorithmic, Game-theoretic, and
Logical Foundations. Cambridge University Press, 2008.
[17] D. Silver, A. Huang, C. J. Maddison, A. Guez, L. Sifre, G. van den Driessche, J. Schrittwieser,
I. Antonoglou, V. Panneershelvam, M. Lanctot, S. Dieleman, D. Grewe, J. Nham, N. Kalchbrenner, I. Sutskever, T. Lillicrap, M. Leach, K. Kavukcuoglu, T. Graepel, and D. Hassabis.
Mastering the game of go with deep neural networks and tree search. Nature, 529, 2016.
[18] D.O. Stahl and P.W. Wilson. Experimental evidence on players? models of other players. JEBO,
25(3), 1994.
[19] M. Tambe. Security and Game Theory: Algorithms, Deployed Systems, Lessons Learned.
Cambridge University Press, New York, NY, USA, 1st edition, 2011.
[20] H. R. Varian. Position auctions. International Journal of Industrial Organization, 25, 2007.
[21] J. R. Wright and K. Leyton-Brown. Beyond equilibrium: Predicting human behavior in normal-form games. In AAAI. AAAI Press, 2010.
[22] J. R. Wright and K. Leyton-Brown. Behavioral game-theoretic models: A Bayesian framework
for parameter analysis. In Proceedings of the 11th International Conference on Autonomous
Agents and Multiagent Systems (AAMAS-2012), volume 2, pages 921-928, 2012.
[23] J. R. Wright and K. Leyton-Brown. Level-0 meta-models for predicting human behavior in
games. In Proceedings of the Fifteenth ACM Conference on Economics and Computation, pages
857-874, 2014.
[24] R. Yang, C. Kiekintvled, F. Ordonez, M. Tambe, and R. John. Improving resource allocation
strategies against human adversaries in security games: An extended study. Artificial Intelligence
Journal (AIJ), 2013.
| 6509 |@word middle:3 version:5 proportion:3 nd:1 seek:1 simplifying:1 paid:1 sgd:1 thereby:1 recursively:1 initial:1 configuration:4 selecting:1 tuned:2 ours:2 offering:1 denoting:3 interestingly:1 outperforms:1 existing:5 current:2 si:2 activation:2 guez:1 must:4 john:1 subsequent:3 drop:1 update:1 aside:1 implying:1 intelligence:3 ith:1 parkes:1 provides:1 preference:1 mathematical:1 constructed:2 edelman:1 combine:2 fitting:1 behavioral:14 manner:3 introduce:1 notably:2 indeed:1 expected:10 behavior:14 planning:1 inspired:1 relying:1 nham:1 automatically:2 company:1 inappropriate:1 increasing:2 aeach:1 spain:1 begin:3 bounded:2 didn:1 tying:2 substantially:1 developed:2 finding:2 transformation:6 hartford:1 every:9 tie:1 rm:4 ostrovsky:1 unit:58 omit:1 appear:2 before:1 engineering:1 local:2 tends:1 depended:1 encoding:2 meet:3 chose:3 emphasis:1 studied:1 relaxing:2 shaded:3 limited:2 tambe:2 range:1 lecun:1 thel:1 testing:1 block:6 differs:2 empirical:1 universal:1 yan:1 significantly:4 shoham:1 flatten:1 confidence:1 regular:1 onto:2 applying:8 optimize:8 equivalent:2 map:7 demonstrated:2 center:1 modifies:1 straightforward:1 economics:4 go:7 starting:2 independently:4 convex:1 sharpness:7 identifying:1 insight:2 rule:1 importantly:1 population:3 handle:1 notion:4 coordinate:1 variation:1 autonomous:1 enhanced:1 rationality:2 play:3 hierarchy:4 programming:1 us:1 storkey:3 element:31 recognition:1 ark:3 predicts:1 kevinlb:1 observed:1 capture:3 wj:2 connected:1 inaction:1 theand:1 ordering:1 removed:1 overfitted:1 valuable:1 mentioned:1 environment:1 ui:1 econometrica:1 trained:2 solving:1 predictive:2 upon:2 selling:1 represented:2 various:1 effective:2 describe:1 artificial:2 aggregate:4 kevin:1 neighborhood:2 outcome:2 hyper:1 kalchbrenner:1 encoded:3 larger:3 supplementary:5 richer:1 cvpr:1 ability:2 unseen:1 highlighted:6 noisy:1 final:14 nll:6 sequence:1 differentiable:1 net:1 propose:1 outputting:1 interaction:4 hire:1 product:2 relevant:1 combining:1 poorly:3 achieve:4 flexibility:1 representational:1 description:2 billion:1 sutskever:1 requirement:1 darrell:1 produce:3 perfect:2 silver:2 adam:2 unrepeated:2 pexp:1 keywords:1 c:1 inwe:1 implies:2 arl:7 differ:1 direction:1 stochastic:1 human:14 material:5 f1:2 generalization:1 mathematically:1 wright:7 normal:15 exp:1 deciding:1 equilibrium:2 mapping:6 predict:3 cognition:1 algorithmic:1 dieleman:1 purpose:1 radio:2 label:2 iw:1 schwarz:1 largest:1 wl:4 weighted:15 always:2 aim:3 rather:4 secondprice:1 hj:3 varying:2 wilson:1 encode:4 derived:3 focus:1 june:1 improvement:3 likelihood:3 indicates:1 contrast:3 industrial:1 defend:1 dollar:1 vl:5 entire:1 typically:3 lj:1 diminishing:1 hidden:43 relation:1 selects:1 interested:1 pixel:1 overall:1 flexible:2 augment:1 denoted:2 plan:2 art:7 softmax:10 orange:7 construct:3 once:3 represents:1 icml:1 future:4 simplex:8 few:2 randomly:1 simultaneously:1 resulted:1 individual:1 familiar:1 maxj:2 ourselves:1 attempt:1 wli:2 ab:1 organization:1 huge:1 mlp:1 acceptance:1 highly:1 investigate:1 indifferent:1 chong:1 deferred:1 mixture:4 wellman:1 light:4 permuting:2 accurate:2 capable:4 encourage:1 necessary:1 respective:2 orthogonal:1 tree:1 rotating:1 varian:1 column:39 modeling:7 earlier:1 ar:8 strategic:18 subset:1 uniform:3 successful:3 conducted:1 considerably:1 chooses:2 combined:2 st:1 international:5 off:1 ofofthe:1 analogously:1 together:3 augmentation:2 aaai:2 nm:2 choose:4 huang:1 cognitive:10 expert:5 american:2 style:1 return:1 li:1 segal:1 pooled:2 satisfy:1 notable:1 
explicitly:4 depends:15 later:2 h1:13 jason:1 performed:2 analyze:2 portion:1 red:10 relied:3 participant:5 capability:1 option:1 purple:4 accuracy:2 convolutional:4 who:4 efficiently:2 listing:1 ofthe:1 lesson:1 camerer:2 conceptually:1 generalize:2 identification:1 vincent:1 kavukcuoglu:1 bayesian:1 advertising:2 worth:1 composes:3 simultaneous:1 sharing:2 against:2 frequency:1 steadily:1 james:1 naturally:1 associated:4 transposed:3 rational:5 arwe:2 dataset:4 costa:1 logical:1 recall:1 knowledge:5 dimensionality:2 improves:1 hilbert:1 segmentation:1 graepel:1 carefully:1 sophisticated:1 goldstein:1 feed:8 higher:4 normalform:1 supervised:1 response:65 improved:6 arj:7 though:1 generality:1 just:3 until:1 overfit:2 hand:6 replacing:1 nonlinear:1 logistic:1 ordonez:1 perhaps:1 usa:1 effect:3 ininthe:1 brown:8 contain:1 normalized:3 lillicrap:1 regularization:1 stahl:1 semantic:1 adjacent:3 round:1 game:53 dispensed:1 generalized:3 prominent:1 theoretic:8 demonstrate:1 performs:1 l1:1 fj:2 auction:7 reasoning:21 image:4 wise:5 regularizing:1 novel:5 fi:7 ari:2 common:1 rotation:1 functional:2 palfrey:1 overview:2 volume:2 extend:3 discussed:2 significant:1 refer:5 cambridge:2 ai:3 similarly:8 sharpen:6 language:1 had:1 dot:3 access:1 base:1 add:2 something:1 recent:2 showed:1 perspective:1 scenario:2 schmidhuber:1 certain:1 meta:1 success:2 leach:1 preserving:8 additional:1 fortunately:1 preceding:5 aggregated:2 maximize:1 multiple:4 full:1 needing:1 reduces:2 exceeds:1 match:1 cross:1 long:1 lin:3 calculates:1 prediction:2 converging:1 basic:1 schematic:1 ensuring:1 vision:3 fifteenth:2 iteration:1 represent:6 tailored:1 cell:5 want:1 interval:1 else:1 suffered:1 boundedly:2 rest:1 probably:1 pooling:33 tend:2 subject:3 spirit:1 integer:1 yang:2 constraining:1 bengio:2 enough:2 identically:1 variety:1 iterate:5 fit:3 psychology:2 independence:1 architecture:26 perfectly:2 suboptimal:1 identified:5 opposite:1 idea:5 imperfect:1 economic:2 motivated:1 utility:22 returned:1 speech:1 york:1 weof:1 action:96 repeatedly:1 deep:20 dramatically:1 useful:1 inthe:2 qch:5 involve:2 dark:3 processed:1 reduced:1 generate:1 mckelvey:1 notice:1 designer:1 sign:1 wr:1 per:2 blue:9 express:7 key:3 putting:3 four:2 nevertheless:1 ar1:2 changing:1 relaxation:2 sum:19 convert:1 powerful:1 respond:11 inputlayer:1 reasonable:1 architectural:1 patch:1 decision:2 lanctot:1 scaling:1 dropout:2 layer:111 internet:1 hi:6 followed:1 played:3 courville:1 opposition:8 fold:1 topological:1 constraint:5 infinity:1 constrain:4 dominated:1 wc:1 todo:2 performing:2 relatively:1 department:1 combination:4 pink:2 across:4 slightly:1 remain:1 mastering:1 wi:8 making:2 hl:15 projecting:1 invariant:5 den:1 unregularized:1 resource:2 equation:1 previously:1 payment:1 discus:2 count:1 mechanism:1 milgrom:1 end:3 antonoglou:1 generalizes:1 operation:3 panneershelvam:1 reallocation:2 apply:1 quarterly:1 enforce:2 alternative:2 ho:1 hassabis:1 original:8 assumes:1 responding:5 include:5 graphical:1 maintaining:1 calculating:2 build:4 classical:1 society:1 unchanged:1 bl:2 tensor:2 move:2 objective:3 already:4 quantity:3 added:1 intend:1 strategy:2 traditional:2 gradient:3 iclr:1 unable:1 maddison:1 considers:1 reason:3 assuming:1 economist:1 length:2 besides:1 modeled:3 quantal:8 relationship:1 index:6 schrittwieser:1 setup:1 mostly:1 negative:2 ba:1 design:6 motivates:1 policy:1 perform:5 allowing:1 observation:3 convolution:3 consultant:1 benchmark:1 finite:3 tok:1 payoff:16 hinton:1 extended:1 committed:2 stack:2 arbitrary:3 
required:1 trainable:1 connection:1 optimized:2 security:6 engine:1 hfor:1 learned:1 barcelona:1 kingma:1 nip:1 able:8 adversary:3 beyond:4 below:7 pattern:2 bar:1 summarize:1 built:2 including:1 green:1 max:11 belief:18 shifting:1 natural:5 rely:2 force:1 predicting:9 regularized:1 hr:1 scarce:1 turning:1 mn:1 representing:3 geb:1 improve:2 imply:1 grewe:1 columbia:1 crawford:1 prior:3 literature:5 epoch:2 review:2 relative:3 unsurprisingly:1 multiagent:4 fully:3 permutation:3 loss:6 hcup:1 interesting:1 limitation:2 allocation:2 proven:1 ingredient:6 clark:3 revenue:1 foundation:1 h2:14 validation:1 shelhamer:1 agent:2 degree:1 sufficient:3 offered:2 imposes:1 principle:1 translation:2 row:45 course:1 lmax:1 surprisingly:1 free:2 transpose:1 keeping:1 salience:1 bias:2 allow:4 jh:2 aij:1 wide:1 fall:1 taking:4 bulletin:1 combating:1 benefit:1 van:1 dimension:1 calculated:1 depth:4 valid:1 superficially:1 rich:1 preventing:1 forward:8 made:1 w0i:1 projected:1 sifre:1 far:1 transaction:1 ignore:1 overfitting:2 andthe:2 instantiation:2 gomes:1 themodel:1 xi:1 spectrum:4 search:2 iterative:17 continuous:1 learn:7 nature:2 ca:1 improving:1 interact:2 hc:2 complex:1 investigated:1 constructing:1 domain:5 vj:1 did:3 pk:4 arise:1 edition:1 tothe:1 aamas:1 crafted:4 board:3 fashion:1 deployed:1 ny:1 surveyed:1 position:1 weighting:2 british:1 rk:1 maxi:2 ton:1 experimented:1 evidence:2 maximizers:1 incorporating:5 consist:2 restricting:1 sequential:1 adding:2 magnitude:1 chen:1 led:1 simply:6 likely:3 explore:2 expressed:1 personnel:1 contained:1 temporarily:1 scalar:14 applies:2 driessche:1 ubc:1 leyton:8 corresponds:2 relies:1 acm:3 lth:1 careful:1 replace:1 change:2 diminished:1 operates:1 uniformly:2 vlj:2 called:1 total:2 invariance:11 experimental:9 stake:1 player:92 exception:1 select:3 formally:4 internal:8 latter:1 incorporate:4 evaluate:2 princeton:1 tested:1 phenomenon:1 |
6,091 | 651 | Memory-based Reinforcement Learning: Efficient
Computation with Prioritized Sweeping
Andrew W. Moore
awm@ai.mit.edu
NE43-759 MIT AI Lab.
545 Technology Square
Cambridge MA 02139
Christopher G. Atkeson
cga@ai.mit.edu
NE43-771 MIT AI Lab.
545 Technology Square
Cambridge MA 02139
Abstract
We present a new algorithm, Prioritized Sweeping, for efficient prediction
and control of stochastic Markov systems. Incremental learning methods
such as Temporal Differencing and Q-learning have fast real time performance. Classical methods are slower, but more accurate, because they make full use of the observations. Prioritized Sweeping aims for the best of both worlds. It uses all previous experiences both to prioritize important dynamic programming sweeps and to guide the exploration of state-space. We compare Prioritized Sweeping with other reinforcement learning
schemes for a number of different stochastic optimal control problems. It
successfully solves large state-space real time problems with which other
methods have difficulty.
1
STOCHASTIC PREDICTION
The paper introduces a memory-based technique, prioritized sweeping, which is used
both for stochastic prediction and reinforcement learning. A fuller version of this
paper is in preparation [Moore and Atkeson, 1992]. Consider the 500 state Markov
system depicted in Figure 1. The system has sixteen absorbing states, depicted by
white and black circles. The prediction problem is to estimate, for every state, the
long-term probability that it will terminate in a white, rather than black, circle.
The data available to the learner is a sequence of observed state transitions. Let us
consider two existing methods along with prioritized sweeping.
Figure 1: A 500-state Markov system. Each state has a random number (mean 5) of random successors chosen within the local neighborhood.
Temporal Differencing (TD) is an elegant incremental algorithm [Sutton, 1988]
which has recently had success with a very large problem [Tesauro, 1991].
The classical method proceeds by building a maximum likelihood model of the
state transitions. q̂_ij (the transition probability from i to j) is estimated by

    q̂_ij = (Number of observations i → j) / (Number of occasions in state i)    (1)
After t + 1 observations the new absorption probability estimates are computed to
satisfy, for each terminal state k, the linear system

    π̂_ik[t+1] = q̂_ik + Σ_{j ∈ succs(i) ∩ NONTERMS} q̂_ij · π̂_jk[t+1],    (2)

where the π̂_ik[t]'s are the absorption probabilities we are trying to learn, where
succs(i) is the set of all states which have been observed as immediate successors
of i and NONTERMS is the set of non-terminal states.
This set of equations is solved after each transition is observed. It is solved using
Gauss-Seidel, an iterative method. What initial estimates should be used to start
the iteration? An excellent answer is to use the previous absorption probability
estimates π̂_ik[t].
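For concreteness, here is a small Python sketch of one classical-method step, under the assumption that transition counts and per-state visit counts are kept in dictionaries (all names here are hypothetical stand-ins, not from the paper):

    def classical_update(counts, visits, pi_hat, terminals, nonterminals, sweeps=20):
        # Maximum-likelihood transition estimates, as in equation (1).
        q_hat = {(i, j): c / visits[i] for (i, j), c in counts.items()}
        # Gauss-Seidel sweeps over the linear system (2), warm-started from
        # the previous absorption-probability estimates pi_hat[i][k].
        for _ in range(sweeps):
            for i in nonterminals:
                for k in terminals:
                    total = q_hat.get((i, k), 0.0)
                    for j in nonterminals:
                        if (i, j) in q_hat:
                            total += q_hat[(i, j)] * pi_hat[j][k]
                    pi_hat[i][k] = total  # in-place update = Gauss-Seidel
        return pi_hat

Warm-starting from the previous π̂[t] is what makes these sweeps converge quickly in practice.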
Prioritized sweeping is designed to combine the advantages of the classical
method with the advantages of TD. It is described in the next section, but let us
first examine performance on the original 500-state example of Figure 1. Figure 2
shows the result. TD certainly learns: by 100,000 observations it is estimating the
terminal-white probability to an RMS accuracy of 0.1. However, the performance
of the classical method appears considerably better than TD: the same error of 0.1
is obtained after only 3000 observations.
Figure 3 indicates why temporal differencing may nevertheless often be more useful.
TD requires far less computation per observation, and so can obtain more data in
real time. Thus, after 300 seconds, TD has had 250,000 observations and is down
to an error of 0.05, whereas even after 300 seconds the classical method has only
1000 observations and a much cruder estimate.

Mean ± Standard Dev'n    After 100,000 observations    After 300 seconds
TD                       0.40 ± 0.077                  0.079 ± 0.067
Classical                0.024 ± 0.0063                0.23 ± 0.038
Pri. Sweep               0.024 ± 0.0061                0.021 ± 0.0080

Table 1: RMS prediction error: mean and standard deviation for ten experiments.

Figure 2: RMS prediction error against number of observations (log scale) for the three learning algorithms.

Figure 3: RMS prediction error against real time in seconds (log scale).
In the same figures we see the motivation behind prioritized sweeping. Its
performance relative to observations is almost as good as the classical method, while its
performance relative to real time is even better than TD.
The graphs in Figures 2 and 3 were based on only one learning experiment each.
Ten further experiments, each with a different random 500 state problem, were run.
The results are given in Table 1.
2
PRIORITIZED SWEEPING
A longer paper [Moore and Atkeson, 1992] will describe the algorithm in detail.
Here we summarize the essential insights, and then simply present the algorithm
in Figure 4. The closest relation to prioritized sweeping is the search scheduling
technique of the A* algorithm [Nilsson, 1971]. Closely related research is being
performed by [Peng and Williams, 1992] into a similar algorithm to prioritized
sweeping, which they call Dyna-Q-queue.
• The memory requirements of learning an N_s × N_s matrix, where N_s is the number
of states, may initially appear prohibitive, especially since we intend to operate
with more than 10,000 states. However, we need only allocate memory for the
experiences the system actually has, and for a wide class of physical systems
there is not enough time in the lifetime of the physical system to run out of
memory.

1. Promote state i_recent (the source of the most recent transition) to top of priority queue.
2. While we are allowed further processing and priority queue not empty:
   2.1 Remove the top state from the priority queue. Call it i.
   2.2 Δ_max := 0
   2.3 For each k ∈ TERMS:
         P_new := q̂_ik + Σ_{j ∈ succs(i) ∩ NONTERMS} q̂_ij · π̂_jk
         Δ := |P_new − π̂_ik|
         π̂_ik := P_new
         Δ_max := max(Δ_max, Δ)
   2.4 For each i' ∈ preds(i):
         P := q̂_{i'i} · Δ_max
         If i' not on queue, or P exceeds the current priority of i', then promote i' to new priority P.

Figure 4: The prioritized sweeping algorithm. This sequence of operations is executed each time a transition is observed.
• We keep a record of all predecessors of each state. When the eventual absorption probabilities of a state are updated, its predecessors are alerted that they
may need to change. A priority value is assigned to each predecessor according
to how large this change could possibly be, and it is placed in a priority
queue.
• After each real-world observation i → j, the transition probability estimate
qij is updated along with the probabilities of transition to all other previously
observed successors of i. Then state i is promoted to the top of the priority
queue so that its absorption probabilities are updated immediately. Next, we
continue to process further states from the top of the queue. Each state that
is processed may result in the addition or promotion of its predecessors within
the queue. This loop continues for a preset number of processing steps or until
the queue empties.
If a real world observation is interesting, all its predecessors and their earlier ancestors quickly find themselves near the top of the priority queue. On the other
hand, if the real world observation is unsurprising, then the processing immediately
proceeds to other, more important areas of state-space which had been under consideration on the previous time step. These other areas may be different from those
in which the system currently finds itself.
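The loop of Figure 4 translates almost line-for-line into code. Below is a minimal Python sketch using heapq (a min-heap, so priorities are negated); preds, succs, q_hat and pi_hat are hypothetical stand-ins for the learned model, and states are assumed to be comparable (e.g., integers):

    import heapq

    def prioritized_sweep(i_recent, q_hat, pi_hat, preds, succs,
                          terminals, nonterminals, max_updates=10):
        heap = [(-float('inf'), i_recent)]   # step 1: promote i_recent
        priority = {i_recent: float('inf')}
        for _ in range(max_updates):         # step 2
            if not heap:
                break
            _, i = heapq.heappop(heap)       # step 2.1
            delta_max = 0.0                  # step 2.2
            for k in terminals:              # step 2.3
                p_new = q_hat.get((i, k), 0.0) + sum(
                    q_hat.get((i, j), 0.0) * pi_hat[j][k]
                    for j in succs[i] if j in nonterminals)
                delta_max = max(delta_max, abs(p_new - pi_hat[i][k]))
                pi_hat[i][k] = p_new
            for ip in preds[i]:              # step 2.4
                p = q_hat.get((ip, i), 0.0) * delta_max
                if p > priority.get(ip, 0.0):
                    priority[ip] = p
                    heapq.heappush(heap, (-p, ip))

Note that heapq has no decrease-key, so re-promoted states leave stale entries on the heap; a production version would lazily skip entries whose recorded priority no longer matches.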
               Dyna-PI+    Dyna-OPT    Pri. Sweep
15 States           400         300           150
117 States         1200         900           500
605 States        36000       21000          6000
4528 States     >500000      245000         59000

Table 2: Number of observations before 98% of decisions were subsequently optimal. Dyna and Prioritized Sweeping were each allowed to process ten states per real-world observation.
3
LEARNING CONTROL FROM REINFORCEMENT
Prioritized sweeping is also directly applicable to stochastic control problems. Remembering all previous transitions allows an additional advantage for control: exploration can be guided towards areas of state space in which we predict we are
ignorant. This is achieved using the exploration philosophy of [Kaelbling, 1990]
and [Sutton, 1990]: optimism in the face of uncertainty.
4
RESULTS
Results of some maze problems of significant size are shown in Table 2. Each
state has four actions: one for each direction. Blocked actions do not move. One
goal state (the star in subsequent figures) gives 100 units of reward, all others give
no reward, and there is a discount factor of 0.99. Trials start in the bottom left
corner. The system is reset to the start state whenever the goal state has been
visited ten times since the last reset. The reset is outside the learning task: it is
not observed as a state transition. Prioritized sweeping is tested against a highly
tuned Q-learner [Watkins, 1989] and a highly tuned Dyna [Sutton, 1990]. The
optimistic experimentation method (described in the full paper) can be applied to
other algorithms, and so the results of optimistic Dyna-learning is also included.
The same mazes were also run as a stochastic problem in which requested actions
were randomly corrupted 50% of the time. The gap between Dyna-OPT and Prioritized Sweeping was reduced in these cases. For example, on a stochastic 4528-state
maze Dyna-OPT took 310,000 steps and Prioritized sweeping took 200,000.
We also have results for a five state benchmark problem described in [Sato et al.,
1988, Barto and Singh, 1990]. Convergence time is reduced by a factor of twenty
over the incremental methods.
                          Q       Dyna-PI+   Optimistic Dyna   Prioritized Sweeping
Experiences to converge   never   never      55,000            14,000
Real time to converge     -       -          1500 secs         330 secs

Table 3: Performance on the deterministic rod-in-maze task. Both Dynas and prioritized sweeping were allowed 100 backups per experience.
Finally we consider a task with a 3-d state space quantized into 15,000 potential
discrete states (not all reachable). The task is shown in Figure 5 and involves finding
the shortest path for a rod which can be rotated and translated.
Q, Dyna-PI+, Optimistic Dyna and prioritized sweeping were all tested. The results
are in Table 3. Q and Dyna-PI+ did not even travel a quarter of the way to the
goal, let alone discover an optimal path, within 200,000 experiences. Optimistic
Dyna and prioritized sweeping both eventually converged, with the latter requiring
a third the experiences and a fifth the real time.
When 2000 backups per experience were permitted, instead of 100, then both optimistic Dyna and prioritized sweeping required fewer experiences to converge. Optimistic Dyna took 21,000 experiences instead of 55,000 but took 2,900 seconds, almost twice the real time. Prioritized sweeping took 13,500 instead of 14,000
experiences-very little improvement, but it used no extra time. This indicates
that for prioritized sweeping, 100 backups per observation is sufficient to make
almost complete use of its observations, so that all the long-term reward (Ĵ) estimates are very close to the estimates which would be globally consistent with the
transition probability estimates (q̂). Thus, we conjecture that even full dynamic
programming after each experience (which would take days of real time) would do
little better.
Figure 5: A three-DOF problem, and the optimal solution path.
Figure 6: Dotted states are all those visited when the Manhattan heuristic was used.

Figure 7: A kd-tree tessellation of state space of a sparse maze.

5
DISCUSSION
Our investigation shows that Prioritized Sweeping can solve large state-space real-time problems with which other methods have difficulty. An important extension
allows heuristics to constrain exploration decisions. For example, in finding an
optimal path through a maze, many states need not be considered at all. Figure 6
shows the areas explored using a Manhattan heuristic when finding the optimal
path from the lower left to the center. For some tasks we may be even satisfied to
cease exploration when we have obtained a solution known to be, say, within 50%
of the optimal solution. This can be achieved by using a heuristic which lies: it tells
us that the best possible reward-to-go is that of a path which is twice the length of
the true shortest possible path.
Furthermore, another promising avenue is prioritized sweeping in conjunction with
kd-tree tessellations of state space to concentrate prioritized sweeping on the important regions [Moore, 1991]. Other benefits of the memory-based approach, described in [Moore, 1992], allow us to control forgetting in changing environments
and automatic scaling of state variables.
Acknowledgements
Thanks to Mary Soon Lee, Satinder Singh and Rich Sutton for useful comments
on an early draft. Andrew W. Moore is supported by a Postdoctoral Fellowship
from SERC/NATO. Support was also provided under Air Force Office of Scientific
Research grant AFOSR-89-0500, an Alfred P. Sloan Fellowship, the W. M. Keck
Foundation Associate Professorship in Biomedical Engineering, Siemens Corporation, and a National Science Foundation Presidential Young Investigator Award to
Christopher G. Atkeson.
References
[Barto and Singh, 1990] A. G. Barto and S. P. Singh. On the Computational Economics of Reinforcement Learning. In D. S. Touretzky, editor, Connectionist
Models: Proceedings of the 1990 Summer School. Morgan Kaufmann, 1990.
[Kaelbling, 1990] L. P. Kaelbling. Learning in Embedded Systems. Ph.D. Thesis;
Technical Report No. TR-90-04, Stanford University, Department of Computer
Science, June 1990.
[Moore and Atkeson, 1992] A. W. Moore and C. G. Atkeson. Memory-based Reinforcement Learning: Converging with Less Data and Less Real Time. In preparation, 1992.
[Moore, 1991] A. W. Moore. Variable Resolution Dynamic Programming: Efficiently Learning Action Maps in Multivariate Real-valued State-spaces. In
L. Birnbaum and G. Collins, editors, Machine Learning: Proceedings of the Eighth
International Workshop. Morgan Kaufmann, June 1991.
[Moore, 1992] A. W. Moore. Fast, Robust Adaptive Control by Learning only
Forward Models. In J. E. Moody, S. J. Hanson, and R. P. Lippman, editors,
Advances in Neural Information Processing Systems 4. Morgan Kaufmann, April
1992.
[Nilsson, 1971] N. J. Nilsson. Problem-Solving Methods in Artificial Intelligence.
McGraw Hill, 1971.
[Peng and Williams, 1992] J. Peng and R. J. Williams. Efficient Search Control in
Dyna. College of Computer Science, Northeastern University, March 1992.
[Sato et al., 1988] M. Sato, K. Abe, and H. Takeda. Learning Control of Finite
Markov Chains with an Explicit Trade-off Between Estimation and Control. IEEE
Trans. on Systems, Man, and Cybernetics, 18(5):667-684, 1988.
[Sutton, 1988] R. S. Sutton. Learning to Predict by the Methods of Temporal
Differences. Machine Learning, 3:9-44, 1988.
[Sutton, 1990] R. S. Sutton. Integrated Architecture for Learning, Planning, and
Reacting Based on Approximating Dynamic Programming. In Proceedings of
the 7th International Conference on Machine Learning. Morgan Kaufmann, June
1990.
[Tesauro, 1991] G. J. Tesauro. Practical Issues in Temporal Difference Learning.
RC 17223 (76307), IBM T. J. Watson Research Center, NY, 1991.
[Watkins, 1989] C. J. C. H. Watkins. Learning from Delayed Rewards. Ph.D. Thesis,
King's College, University of Cambridge, May 1989.
Depth from a Single Image by Harmonizing
Overcomplete Local Network Predictions
Ayan Chakrabarti
TTI-Chicago
Chicago, IL
ayanc@ttic.edu
Jingyu Shao
Dept. of Statistics, UCLA*
Los Angeles, CA
shaojy15@ucla.edu
Gregory Shakhnarovich
TTI-Chicago
Chicago, IL
gregory@ttic.edu
Abstract
A single color image can contain many cues informative towards different aspects of local geometric structure. We approach the problem of monocular depth
estimation by using a neural network to produce a mid-level representation that
summarizes these cues. This network is trained to characterize local scene geometry by predicting, at every image location, depth derivatives of different orders,
orientations and scales. However, instead of a single estimate for each derivative,
the network outputs probability distributions that allow it to express confidence
about some coefficients, and ambiguity about others. Scene depth is then estimated
by harmonizing this overcomplete set of network predictions, using a globalization
procedure that finds a single consistent depth map that best matches all the local
derivative distributions. We demonstrate the efficacy of this approach through
evaluation on the NYU v2 depth data set.
1
Introduction
In this paper, we consider the task of monocular depth estimation, i.e., recovering scene depth from
a single color image. Knowledge of a scene's three-dimensional (3D) geometry can be useful in
reasoning about its composition, and therefore measurements from depth sensors are often used to
augment image data for inference in many vision, robotics, and graphics tasks. However, the human
visual system can clearly form at least an approximate estimate of depth in the absence of stereo and
parallax cues (e.g., from two-dimensional photographs), and it is desirable to replicate this ability
computationally. Depth information inferred from monocular images can serve as a useful proxy
when explicit depth measurements are unavailable, and be used to refine these measurements where
they are noisy or ambiguous.
The 3D co-ordinates of a surface imaged by a perspective camera are physically ambiguous along a
ray passing through the camera center. However, a natural image often contains multiple cues that can
indicate aspects of the scene's underlying geometry. For example, the projected scale of a familiar
object of known size indicates how far it is; foreshortening of regular textures provides information
about surface orientation; gradients due to shading indicate both orientation and curvature; strong
edges and corners can correspond to convex or concave depth boundaries; and occluding contours or
the relative position of key landmarks can be used to deduce the coarse geometry of an object or the
whole scene. While a given image may be rich in such geometric cues, it is important to note that
these cues are present in different image regions, and each indicates a different aspect of 3D structure.
We propose a neural network-based approach to monocular depth estimation that explicitly leverages
this intuition. Prior neural methods have largely sought to directly regress to depth [1, 2], with some
additionally making predictions about smoothness across adjacent regions [4], or predicting relative
* Part of this work was done while JS was a visiting student at TTI-Chicago.
30th Conference on Neural Information Processing Systems (NIPS 2016), Barcelona, Spain.
Figure 1: To recover depth from a single image, we first use a neural network trained to characterize
local depth structure. This network produces distributions for values of various depth derivatives (of
different orders, at multiple scales and orientations) at every pixel, using global scene features and
those from a centered local image patch (top left). A distributional output allows the network to
determine different derivatives at different locations with different degrees of certainty (right). An
efficient globalization algorithm is then used to produce a single consistent depth map estimate.
depth ordering between pairs of image points [7]. In contrast, we train a neural network with a rich
distributional output space. Our network characterizes various aspects of the local geometric structure
by predicting values of a number of derivatives of the depth map (at various scales, orientations, and
of different orders, including the 0th derivative, i.e., the depth itself) at every image location.
However, as mentioned above, we expect different image regions to contain cues informative towards
different aspects of surface depth. Therefore, instead of over-committing to a single value, our
network outputs parameterized distributions for each derivative, allowing it to effectively characterize
the ambiguity in its predictions. The full output of our network is then this set of multiple distributions
at each location, characterizing coefficients in effectively an overcomplete representation of the depth
map. To recover the depth map itself, we employ an efficient globalization procedure to find the
single consistent depth map that best agrees with this set of local distributions.
We evaluate our approach on the NYUv2 depth data set [11], and find that it achieves state-of-the-art
performance. Beyond the benefits to the monocular depth estimation task itself, the success of our
approach suggests that our network can serve as a useful way to incorporate monocular cues in more
general depth estimation settings, e.g., when sparse or noisy depth measurements are available. Since
the output of our network is distributional, it can be easily combined with partial depth cues from other
sources within a common globalization framework. Moreover, we expect our general approach (of
learning to predict distributions in an overcomplete representation followed by globalization) to
be useful broadly in tasks that involve recovering other kinds of scene value maps that have rich
structure, such as optical or scene flow, surface reflectances, illumination environments, etc.
2
Related Work
Interest in monocular depth estimation dates back to the early days of computer vision, with methods
that reasoned about geometry from cues such as diffuse shading [12], or contours [13, 14]. However,
the last decade has seen accelerated progress on this task [1-10], largely owing to the availability of
cheap consumer depth sensors, and consequently, large amounts of depth data for training learning-based methods. Most recent methods are based on training neural networks to map RGB images
to geometry [1-7]. Eigen et al. [1, 2] set up their network to regress directly to per-pixel depth
values, although they provide deeper supervision to their network by requiring an intermediate layer
to explicitly output a coarse depth map. Other methods [3, 4] use conditional random fields (CRFs)
to smooth their neural estimates. Moreover, the network in [4] also learns to predict one aspect of
depth structure, in the form of the CRF's pairwise potentials.
Some methods are trained to exploit other individual aspects of geometric structure. Wang et al. [6]
train a neural network to output surface normals instead of depth (Eigen et al. [1] do so as well, for a
network separately trained for this task). In a novel approach, Zoran et al. [7] were able to train a
network to predict the relative depth ordering between pairs of points in the image: whether one
surface is behind, in front of, or at the same depth as the other. However, their globalization scheme
to combine these outputs was able to achieve limited accuracy at estimating actual depth, due to the
limited information carried by ordinal pair-wise predictions.
In contrast, our network learns to reason about a more diverse set of structural relationships, by
predicting a large number of coefficients at each location. Note that some prior methods [3, 5] also
regress to coefficients in some basis instead of to depth values directly. However, their motivation
for this is to reduce the complexity of the output space, and use basis sets that have much lower
dimensionality than the depth map itself. Our approach is different: our predictions are distributions
over coefficients in an overcomplete representation, motivated by the expectation that our network
will be able to precisely characterize only a small subset of the total coefficients in our representation.
Our overall approach is similar to, and indeed motivated by, the recent work of Chakrabarti et al. [15],
who proposed estimating a scene map (they considered disparity estimation from stereo images)
by first using local predictors to produce distributional outputs from many overlapping regions at
multiple scales, followed by a globalization step to harmonize these outputs. However, in addition to
the fact that we use a neural network to carry out local inference, our approach is different in that
inference is not based on imposing a restrictive model (such as planarity) on our local outputs. Instead,
we produce independent local distributions for various derivatives of the depth map. Consequently,
our globalization method need not explicitly reason about which local predictions are "outliers" with
respect to such a model. Moreover, since our coefficients can be related to the global depth map
through convolutions, we are able to use Fourier-domain computations for efficient inference.
3
Proposed Approach
We formulate our problem as that of estimating a scene map y(n) ∈ R, which encodes point-wise
scene depth, from a single RGB image x(n) ∈ R³, where n ∈ Z² indexes location on the image
plane. We represent this scene map y(n) in terms of a set of coefficients {w_i(n)}_{i=1}^{K} at each location
n, corresponding to various spatial derivatives. Specifically, these coefficients are related to the scene
map y(n) through convolution with a bank of derivative filters {k_i}_{i=1}^{K}, i.e.,

    w_i(n) = (y ∗ k_i)(n).    (1)
For our task, we define {k_i} to be a set of 2D derivative-of-Gaussian filters with standard deviations
2^s pixels, for scales s = {1, 2, 3}. We use the zeroth order derivative (i.e., the Gaussian itself),
first order derivatives along eight orientations, as well as second order derivatives, along each of
the orientations, and orthogonal orientations (see Fig. 1 for examples). We also use the impulse
filter which can be interpreted as the zeroth derivative at scale 0, with the corresponding coefficients
w_i(n) = y(n); this gives us a total of K = 64 filters. We normalize the first and second order filters
to be unit norm. The zeroth order filter coefficients typically have higher magnitudes, and in practice,
we find it useful to normalize them as ‖k_i‖_2 = 1/4 to obtain a more balanced representation.
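As an illustration, first-order coefficient maps w_i = k_i ∗ y can be computed with scipy's Gaussian-derivative filters; the sketch below combines axis-aligned derivatives by orientation and, for brevity, omits the unit-norm rescaling described above:

    import numpy as np
    from scipy import ndimage

    def dog_coefficients(y, s, theta):
        # First-order derivative-of-Gaussian response of the depth map y
        # at scale s (sigma = 2**s pixels) and orientation theta, built as
        # a directional combination of axis-aligned Gaussian derivatives.
        sigma = 2.0 ** s
        dy = ndimage.gaussian_filter(y, sigma, order=(1, 0))  # d/d(row)
        dx = ndimage.gaussian_filter(y, sigma, order=(0, 1))  # d/d(col)
        return np.cos(theta) * dx + np.sin(theta) * dy

    # Zeroth-order coefficients at scale s are just Gaussian-smoothed depth:
    # w0 = ndimage.gaussian_filter(y, 2.0 ** s)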
To estimate the scene map y(n), we first use a convolutional neural network to output distributions
for the coefficients p(w_i(n)), for every filter i and location n. We choose a parametric form for these
distributions p(·), with the network predicting the corresponding parameters for each coefficient. The
network is trained to produce these distributions for each set of coefficients {w_i(n)} by using as input
a local region centered around n in the RGB image x. We then form a single consistent estimate
of y(n) by solving a global optimization problem that maximizes the likelihood of the different
coefficients of y(n) under the distributions provided by our network. We now describe the different
components of our approach (which is summarized in Fig. 1): the parametric form for our local
coefficient distributions, the architecture of our neural network, and our globalization method.
Figure 2: We train a neural network to output distributions for K depth derivatives {wi (n)} at each
location n, using a color image as input. The distributions are parameterized as Gaussian mixtures,
and the network produces the M mixture weights for each coefficient. Our network includes a local
path (green) with a cascade of convolution layers to extract features from a 97 ? 97 patch around each
location n; and a scene path (red) with pre-trained VGG-19 layers to compute a single scene feature
vector. We learn a linear map (with x32 upsampling) from this scene vector to per-location features.
The local and scene features are concatenated and used to generate the final distributions (blue).
3.1
Parameterizing Local Distributions
Our neural network has to output a distribution, rather than a single estimate, for each coefficient
w_i(n). We choose Gaussian mixtures as a convenient parametric form for these distributions:

    p_{i,n}(w_i(n)) = Σ_{j=1}^{M} p̂_i^j(n) · (1 / (√(2π) σ_i)) · exp( −|w_i(n) − c_i^j|² / (2σ_i²) ),    (2)

where M is the number of mixture components (64 in our implementation), σ_i² is a common variance
for all components for derivative i, and {c_i^j} are the individual component means. A distribution for a
specific coefficient w_i(n) can then be characterized by our neural network by producing the mixture
weights {p̂_i^j(n)}, with Σ_j p̂_i^j(n) = 1, for each w_i(n) from the scene's RGB image.
Prior to training the network, we fix the means {c_i^j} and variances {σ_i²} based on a training set of
ground truth depth maps. We use one-dimensional K-means clustering on sets of training coefficient
values {w_i} for each derivative i, and set the means c_i^j in (2) above to the cluster centers. We set σ_i²
to the average in-cluster variance; however, since these coefficients have heavy-tailed distributions,
we compute this average only over clusters with more than a minimum number of assignments.
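A sketch of this pre-training step, using scikit-learn's KMeans on a flat array of training coefficients for one derivative (the minimum-count threshold here is a hypothetical choice, not a value from the paper):

    import numpy as np
    from sklearn.cluster import KMeans

    def fit_mixture_params(w_samples, M=64, min_count=100):
        # Fix component means c_i^j and the shared variance sigma_i^2 for
        # one derivative i from training coefficient values (Section 3.1).
        km = KMeans(n_clusters=M, n_init=10).fit(w_samples.reshape(-1, 1))
        means = km.cluster_centers_.ravel()
        labels = km.labels_
        # Average in-cluster variance over well-populated clusters only,
        # to limit the influence of the heavy tails.
        variances = [np.var(w_samples[labels == j])
                     for j in range(M) if np.sum(labels == j) >= min_count]
        return means, float(np.mean(variances))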
3.2
Neural Network-based Local Predictions
Our method uses a neural network to predict the mixture weights p̂_i^j(n) of the parameterization in (2)
from an input color image. We train our network to output K × M numbers at each pixel location
n, which we interpret as a set of M-dimensional vectors corresponding to the weights {p̂_i^j(n)}_j,
for each of the K distributions of the coefficients {w_i(n)}_i. This training is done with respect to a
loss between the predicted p̂_i^j(n), and the best fit of the parametric form in (2) to the ground truth
derivative value w_i(n). Specifically, we define q_i^j(n) in terms of the true w_i(n) as:

    q_i^j(n) ∝ exp( −|w_i(n) − c_i^j|² / (2σ_i²) ),    Σ_j q_i^j(n) = 1,    (3)

and define the training loss L in terms of the KL-divergence between these vectors q_i^j(n) and the
network predictions p̂_i^j(n), weighting the loss for each derivative by its variance σ_i²:

    L = −(1 / NK) Σ_{i,n} σ_i² Σ_{j=1}^{M} q_i^j(n) [ log p̂_i^j(n) − log q_i^j(n) ],    (4)

where N is the total number of locations n.
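In code, the targets of (3) and the per-coefficient summand of (4) are a few lines of numpy (p_hat is the network's softmax output for one coefficient; the epsilon is a numerical-stability assumption of ours):

    import numpy as np

    def kl_loss_single(w_true, p_hat, means, sigma2, eps=1e-12):
        # Targets q_i^j(n) from equation (3), computed stably in log-space.
        logits = -(w_true - means) ** 2 / (2.0 * sigma2)
        q = np.exp(logits - logits.max())
        q /= q.sum()
        # sigma_i^2-weighted KL(q || p_hat), the summand of equation (4).
        return sigma2 * np.sum(q * (np.log(q + eps) - np.log(p_hat + eps)))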
Our network has a fairly high-dimensional output space, corresponding to K × M numbers, with
(M − 1) × K degrees of freedom, at each location n. Its architecture, detailed in Fig. 2, uses a
cascade of seven convolution layers (each with ReLU activations) to extract a 1024-dimensional local
feature vector from each 97 × 97 local patch in the input image. To further add scene-level semantic
context, we include a separate path that extracts a single 4096-dimensional feature vector from
the entire image, using pre-trained layers (up to pool5) from the VGG-19 [16] network, followed by
downsampling with averaging by a factor of two, and a fully connected layer with a ReLU activation
that is trained with dropout. This global vector is used to derive a 64-dimensional vector for each
location n, using a learned layer that generates a feature map at a coarser resolution, that is then
bi-linearly upsampled by a factor of 32 to yield an image-sized map.
The concatenated local and scene-level features are passed through two more hidden layers (with
ReLU activations). The final layer produces the K × M-vector of mixture weights p̂_i^j(n), applying a
separate softmax to each of the M-dimensional vectors {p̂_i^j(n)}_j. All layers in the network are learned
end-to-end, with the VGG-19 layers finetuned with a reduced learning rate factor of 0.1 compared to
the rest of the network. The local path of the network is applied in a "fully convolutional" way [17]
during training and inference, allowing efficient reuse of computations between overlapping patches.
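A schematic of this two-path design in PyTorch is sketched below. Only the quantities the text fixes (1024-d local features, a 64-d per-location scene feature, K·M outputs, a per-coefficient softmax) are taken from the paper; the individual kernel sizes and the abbreviated scene path are illustrative guesses, not the authors' exact architecture:

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class LocalDepthNet(nn.Module):
        def __init__(self, K=64, M=64, scene_dim=4096):
            super().__init__()
            # Local path: a cascade of convolutions (sizes illustrative).
            self.local = nn.Sequential(
                nn.Conv2d(3, 64, 9), nn.ReLU(),
                nn.Conv2d(64, 128, 9), nn.ReLU(),
                nn.Conv2d(128, 1024, 9), nn.ReLU())
            # Learned map from the global scene vector to a coarse 64-d map.
            self.scene_to_map = nn.Conv2d(scene_dim, 64, 1)
            self.head = nn.Sequential(
                nn.Conv2d(1024 + 64, 1024, 1), nn.ReLU(),
                nn.Conv2d(1024, K * M, 1))
            self.K, self.M = K, M

        def forward(self, patch_input, scene_map):
            f_local = self.local(patch_input)
            coarse = self.scene_to_map(scene_map)
            f_scene = F.interpolate(coarse, size=f_local.shape[-2:],
                                    mode='bilinear', align_corners=False)
            out = self.head(torch.cat([f_local, f_scene], dim=1))
            B, _, H, W = out.shape
            # Separate softmax over the M mixture weights of each of the
            # K coefficient distributions.
            return F.softmax(out.view(B, self.K, self.M, H, W), dim=2)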
3.3
Global Scene Map Estimation
Applying our neural network to a given input image produces a dense set of distributions p_{i,n}(w_i(n))
for all derivative coefficients at all locations. We combine these to form a single coherent estimate by
finding the scene map y(n) whose coefficients {w_i(n)} have high likelihoods under the corresponding
distributions {p_{i,n}(·)}. We do this by optimizing the following objective:

    y = arg max_y  Σ_{i,n} σ_i² log p_{i,n}( (k_i ∗ y)(n) ),    (5)

where, like in (4), the log-likelihoods for different derivatives are weighted by their variance σ_i².
The objective in (5) is a summation over a large (K times image-size) number of non-convex terms,
each of which depends on scene values y(n) at multiple locations n in a local neighborhood,
based on the support of filter k_i. Despite the apparent complexity of this objective, we find that
approximate inference using an alternating minimization algorithm, like in [15], works well in
practice. Specifically, we create explicit auxiliary variables w_i(n) for the coefficients, and solve the
following modified optimization problem:

    y = arg min_y min_{{w_i(n)}}  −Σ_{i,n} σ_i² log p_{i,n}(w_i(n)) + (β/2) Σ_{i,n} ( w_i(n) − (k_i ∗ y)(n) )² + R(y).    (6)
Note that the second term above forces coefficients of y(n) to be equal to the corresponding auxiliary
variables w_i(n), as β → ∞. We iteratively compute (6), by alternating between minimizing the
objective with respect to y(n) and to {w_i(n)}, keeping the other fixed, while increasing the value of
β across iterations.
Note that there is also a third regularization term R(y) in (6), which we define as

    R(y) = Σ_r Σ_n ‖ (∇_r ∗ y)(n) ‖²,    (7)
using 3 × 3 Laplacian filters, at four orientations, for {∇_r}. In practice, this term only affects the
computation of y(n) in the initial iterations when the value of β is small, and in later iterations is
dominated by the values of w_i(n). However, we find that adding this regularization allows us to
increase the value of β faster, and therefore converge in fewer iterations.
Each step of our alternating minimization can be carried out efficiently. When y(n) is fixed, the
objective in (6) can be minimized with respect to each coefficient w_i(n) independently as:

    w_i(n) = arg min_w  −log p_{i,n}(w) + (β / (2σ_i²)) ( w − w̄_i(n) )²,    (8)
where w̄_i(n) = (k_i ∗ y)(n) is the corresponding derivative of the current estimate of y(n). Since
p_{i,n}(·) is a mixture of Gaussians, the objective in (8) can also be interpreted as the (scaled) negative
log-likelihood of a Gaussian mixture, with "posterior" mixture means w̄_i^j(n) and weights p̃_i^j(n):

    w̄_i^j(n) = ( c_i^j + β w̄_i(n) ) / ( 1 + β ),    p̃_i^j(n) ∝ p̂_i^j(n) exp( −β ( c_i^j − w̄_i(n) )² / ( (β + 1) · 2σ_i² ) ).    (9)
While there is no closed form solution to (8), we find that a reasonable approximation is to simply set
w_i(n) to the posterior mean value w̄_i^j(n) for which the weight p̃_i^j(n) is the highest.
The second step at each iteration involves minimizing (6) with respect to y given the current estimates
of w_i(n). This is a simple least-squares minimization given by

    y = arg min_y  β Σ_{i,n} ( (k_i ∗ y)(n) − w_i(n) )² + Σ_{r,n} ‖ (∇_r ∗ y)(n) ‖².    (10)
Note that since all terms above are related to y by convolutions with different filters, we can carry out
this minimization very efficiently in the Fourier domain.
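Concretely, with periodic boundary handling, the minimizer of (10) has a closed form in the DFT domain; the sketch below assumes the filter transforms Kf and Rf were precomputed at image size (e.g., by np.fft.fft2 of zero-padded kernels), which is our assumption rather than a detail stated in the paper:

    import numpy as np

    def y_update(W, Kf, Rf, beta):
        # W:  list of auxiliary coefficient maps w_i(n)
        # Kf: DFTs of the derivative filters k_i; Rf: DFTs of the filters
        # used by the regularizer R(y) in equation (7).
        num = np.zeros_like(Kf[0])
        den = np.zeros(Kf[0].shape)
        for w, kf in zip(W, Kf):
            num += beta * np.conj(kf) * np.fft.fft2(w)
            den += beta * np.abs(kf) ** 2
        for rf in Rf:
            den += np.abs(rf) ** 2
        return np.real(np.fft.ifft2(num / den))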
We initialize our iterations by setting w_i(n) simply to the component mean c_i^j for which our predicted
weight p̂_i^j(n) is highest. Then, we apply the y and {w_i(n)} minimization steps alternately, while
increasing β from 2^−10 to 2^7, by a factor of 2^{1/8} at each iteration.
4
Experimental Results
We train and evaluate our method on the NYU v2 depth dataset [11]. To construct our training and
validation sets, we adopt the standard practice of using the raw videos corresponding to the training
images from the official train/test split. We randomly select 10% of these videos for validation,
and use the rest for training our network. Our training set is formed by sub-sampling video frames
uniformly, and consists of roughly 56,000 color image-depth map pairs. Monocular depth estimation
algorithms are evaluated on their accuracy in the 561 × 427 crop of the depth map that contains a
valid depth projection (including filled-in areas within this crop). We use the same crop of the color
image as input to our algorithm, and train our network accordingly.
We let the scene map y(n) in our formulation correspond to the reciprocal of metric depth, i.e., y(n) =
1/z(n). While other methods use different compressive transforms (e.g., [1, 2] regress to log z(n)), our
choice is motivated by the fact that points on the image plane are related to their world co-ordinates
by a perspective transform. This implies, for example, that in planar regions the first derivatives of
y(n) will depend only on surface orientation, and that second derivatives will be zero.
4.1
Network Training
We use data augmentation during training, applying random rotations of ±5° and horizontal flips
simultaneously to images and depth maps, and random contrast changes to images. We use a fully
convolutional version of our architecture during training with a stride of 8 pixels, yielding nearly
4000 training patches per image. We train the network using SGD for a total of 14 epochs, using a
batch size of only one image and a momentum value of 0.9. We begin with a learning rate of 0.01,
and reduce it after the 4th, 8th, 10th, 12th, and 13th epochs, each time by a factor of two. This
schedule was set by tracking the post-globalization depth accuracy on a validation set.
4.2
Evaluation
First, we analyze the informativeness of individual distributional outputs from our neural network.
Figure 3 visualizes the accuracy and confidence of the local per-coefficient distributions produced by
our network on a typical image. For various derivative filters, we display maps of the absolute error
between the true coefficient values w_i(n) and the mean of the corresponding predicted distributions
{p_{i,n}(·)}. Alongside these errors, we also visualize the network's "confidence" in terms of a map of
the standard deviations of {p_{i,n}(·)}. We see that the network makes high confidence predictions for
least for zeroth order derivatives. Moreover, we find that all regions with high predicted confidence
Table 1: Effect of Individual Derivatives on Global Estimation Accuracy (on 100 validation images)

                                        Lower Better                                   Higher Better
Filters                     RMSE (lin.)  RMSE (log)  Abs Rel.  Sqr Rel.   δ < 1.25   δ < 1.25²   δ < 1.25³
Full                        0.6921       0.2533      0.1887    0.1926     76.62%     91.58%      96.62%
Scale 0,1 (All orders)      0.7471       0.2684      0.2019    0.2411     75.33%     90.90%      96.28%
Scale 0,1,2 (All orders)    0.7241       0.2626      0.1967    0.2210     75.82%     91.12%      96.41%
Order 0 (All scales)        0.7971       0.2775      0.2110    0.2735     73.64%     90.40%      95.99%
Order 0,1 (All scales)      0.6966       0.2542      0.1894    0.1958     76.56%     91.53%      96.62%
Scale 0 (Pointwise Depth)   0.7424       0.2656      0.2005    0.2177     74.50%     90.66%      96.30%
Figure 3: We visualize the informativeness of the local predictions from our network (on an image
from the validation set). We show the accuracy and confidence of the predicted distributions for
coefficients of different derivative filters (shown inset), in terms of the error between the distribution
mean and true coefficient value, and the distribution standard deviation respectively. We find that
errors are always low in regions of high confidence (low standard deviation). We also find that despite
the fact that individual coefficients have many low-confidence regions, our globalization procedure is
able to combine them to produce an accurate depth map.
(i.e., low standard deviation) also have low errors. Figure 3 also displays the corresponding global
depth estimates, along with their accuracy relative to the ground truth. We find that despite having
large low-confidence regions for individual coefficients, our final depth map is still quite accurate. This
suggests that the information from different coefficients' predicted distributions is complementary.
To quantitatively characterize the contribution of the various components of our overcomplete
representation, we conduct an ablation study on 100 validation images. With the same trained
network, we include different subsets of filter coefficients for global estimation (leaving out either
specific derivative orders, or scales) and report their accuracy in Table 1. We use the standard
metrics from [2] for accuracy between estimated and true depth values ẑ(n) and z(n) across all
pixels in all images: root mean square error (RMSE) of both z and log z, mean relative error
(|z(n) − ẑ(n)|/z(n)) and relative square error (|z(n) − ẑ(n)|²/z(n)), as well as percentages of
pixels with error δ = max( z(n)/ẑ(n), ẑ(n)/z(n) ) below different thresholds. We find that removing
each of these subsets degrades the performance of the global estimation method, with second order
derivatives contributing least to final estimation accuracy. Interestingly, combining multiple scales but
with only zeroth order derivatives performs worse than using just the point-wise depth distributions.
Finally, we evaluate the performance of our method on the NYU v2 test set. Table 2 reports the
quantitative performance of our method, along with other state-of-the-art approaches over the entire
test set, and we find that the proposed method yields superior performance on most metrics. Figure 4
shows example predictions from our approach and that of [1]. We see that our approach is often able
to better reproduce local geometric structure in its predictions (desk & chair in column 1, bookshelf
in column 4), although it occasionally mis-estimates the relative position of some objects (e.g., globe
in column 5). At the same time, it is also usually able to correctly estimate the depth of large and
texture-less planar regions (but, see column 6 for an example failure case).
Our overall inference method (network predictions and globalization) takes 24 seconds per-image
when using an NVIDIA Titan X GPU. The source code for implementation, along with a pre-trained
network model, are available at http://www.ttic.edu/chakrabarti/mdepth.
Table 2: Depth Estimation Performance on NYUv2 [11] Test Set

                                  Lower Better                                   Higher Better
Method                RMSE (lin.)  RMSE (log)  Abs Rel.  Sqr Rel.   δ < 1.25   δ < 1.25²   δ < 1.25³
Proposed              0.620        0.205       0.149     0.118      80.6%      95.8%       98.7%
Eigen 2015 [1] (VGG)  0.641        0.214       0.158     0.121      76.9%      95.0%       98.8%
Wang [3]              0.745        0.262       0.220     0.210      60.5%      89.0%       97.0%
Baig [5]              0.802        -           0.241     -          61.0%      -           -
Eigen 2014 [2]        0.877        0.283       0.214     0.204      61.4%      88.8%       97.2%
Liu [4]               0.824        -           0.230     -          61.4%      88.3%       97.1%
Zoran [7]             1.22         0.43        0.41      0.57       -          -           -
Figure 4: Example depth estimation results on NYU v2 test set.
5
Conclusion
In this paper, we described an alternative approach to reasoning about scene geometry from a single
image. Instead of formulating the task as a regression to point-wise depth values, we trained a neural
network to probabilistically characterize local coefficients of the scene depth map in an overcomplete
representation. We showed that these local predictions could then be reconciled to form an estimate
of the scene depth map using an efficient globalization procedure. We demonstrated the utility of our
approach by evaluating it on the NYU v2 depth benchmark.
Its performance on the monocular depth estimation task suggests that our network's local predictions
effectively summarize the depth cues present in a single image. In future work, we will explore how
these predictions can be used in other settings, e.g., to aid stereo reconstruction, or improve the
quality of measurements from active and passive depth sensors. We are also interested in exploring
whether our approach of training a network to make overcomplete probabilistic local predictions can
be useful in other applications, such as motion estimation or intrinsic image decomposition.
Acknowledgments. AC acknowledges support for this work from the National Science Foundation
under award no. IIS-1618021, and from a gift by Adobe Systems. AC and GS thank NVIDIA
Corporation for donations of Titan X GPUs used in this research.
References
[1] D. Eigen and R. Fergus. Predicting depth, surface normals and semantic labels with a common
multi-scale convolutional architecture. In Proc. ICCV, 2015.
[2] D. Eigen, C. Puhrsch, and R. Fergus. Depth map prediction from a single image using a
multi-scale deep network. In NIPS, 2014.
[3] P. Wang, X. Shen, Z. Lin, S. Cohen, B. Price, and A. Yuille. Towards unified depth and semantic
prediction from a single image. In Proc. CVPR, 2015.
[4] F. Liu, C. Shen, and G. Lin. Deep convolutional neural fields for depth estimation from a single
image. In Proc. CVPR, 2015.
[5] M. Baig and L. Torresani. Coupled depth learning. In Proc. WACV, 2016.
[6] X. Wang, D. Fouhey, and A. Gupta. Designing deep networks for surface normal estimation. In
Proc. CVPR, 2015.
[7] D. Zoran, P. Isola, D. Krishnan, and W. T. Freeman. Learning ordinal relationships for mid-level
vision. In Proc. ICCV, 2015.
[8] K. Karsch, C. Liu, and S. B. Kang. Depth extraction from video using non-parametric sampling.
In Proc. ECCV. 2012.
[9] L. Ladicky, J. Shi, and M. Pollefeys. Pulling things out of perspective. In Proc. CVPR, 2014.
[10] A. Saxena, S. H. Chung, and A. Y. Ng. Learning depth from single monocular images. In NIPS,
2005.
[11] N. Silberman, D. Hoiem, P. Kohli, and R. Fergus. Indoor segmentation and support inference
from rgbd images. In Proc. ECCV. 2012.
[12] B. K. Horn and M. J. Brooks. Shape from shading. MIT Press, 1986.
[13] M. B. Clowes. On seeing things. Artificial intelligence, 1971.
[14] K. Sugihara. Machine interpretation of line drawings. MIT Press, 1986.
[15] A. Chakrabarti, Y. Xiong, S. Gortler, and T. Zickler. Low-level vision by consensus in a spatial
hierarchy of regions. In Proc. CVPR, 2015.
[16] K. Chatfield, K. Simonyan, A. Vedaldi, and A. Zisserman. Return of the devil in the details:
Delving deep into convolutional nets. In Proc. BMVC, 2014.
[17] J. Long, E. Shelhamer, and T. Darrell. Fully convolutional networks for semantic segmentation.
In Proc. CVPR, 2015.
6,093 | 6,511 | Combinatorial Multi-Armed Bandit with General
Reward Functions
Wei Chen*   Wei Hu†   Fu Li‡   Jian Li§   Yu Liu¶   Pinyan Lu‖
Abstract
In this paper, we study the stochastic combinatorial multi-armed bandit (CMAB)
framework that allows a general nonlinear reward function, whose expected value
may not depend only on the means of the input random variables but possibly
on the entire distributions of these variables. Our framework enables a much
larger class of reward functions such as the max() function and nonlinear utility
functions. Existing techniques relying on accurate estimations of the means of
random variables, such as the upper confidence bound (UCB) technique, do not
work directly on these functions. We propose a new algorithm called stochastically
dominant confidence bound (SDCB), which estimates the distributions of underlying random variables and their stochastically dominant confidence bounds. We
prove that SDCB can achieve O(log T) distribution-dependent regret and Õ(√T)
distribution-independent regret, where T is the time horizon. We apply our results
to the K-MAX problem and expected utility maximization problems. In particular,
for K-MAX, we provide the first polynomial-time approximation scheme (PTAS)
for its offline problem, and give the first Õ(√T) bound on the (1−ε)-approximation
regret of its online problem, for any ε > 0.
1
Introduction
Stochastic multi-armed bandit (MAB) is a classical online learning problem typically specified as a
player against m machines or arms. Each arm, when pulled, generates a random reward following an
unknown distribution. The task of the player is to select one arm to pull in each round based on the
historical rewards she collected, and the goal is to collect cumulative reward over multiple rounds as
much as possible. In this paper, unless otherwise specified, we use MAB to refer to stochastic MAB.
MAB problem demonstrates the key tradeoff between exploration and exploitation: whether the
player should stick to the choice that performs the best so far, or should try some less explored
alternatives that may provide better rewards. The performance measure of an MAB strategy is its
cumulative regret, which is defined as the difference between the cumulative reward obtained by
always playing the arm with the largest expected reward and the cumulative reward achieved by the
learning strategy. MAB and its variants have been extensively studied in the literature, with classical
results such as tight Θ(log T) distribution-dependent and Θ(√T) distribution-independent upper and
lower bounds on the regret in T rounds [19, 2, 1].
An important extension to the classical MAB problem is combinatorial multi-armed bandit (CMAB).
In CMAB, the player selects not just one arm in each round, but a subset of arms or a combinatorial
* Microsoft Research, email: weic@microsoft.com. The authors are listed in alphabetical order.
† Princeton University, email: huwei@cs.princeton.edu.
‡ The University of Texas at Austin, email: fuli.theory.research@gmail.com.
§ Tsinghua University, email: lapordge@gmail.com.
¶ Tsinghua University, email: liuyujyyz@gmail.com.
‖ Shanghai University of Finance and Economics, email: lu.pinyan@mail.shufe.edu.cn.
30th Conference on Neural Information Processing Systems (NIPS 2016), Barcelona, Spain.
object in general, referred to as a super arm, which collectively provides a random reward to the
player. The reward depends on the outcomes from the selected arms. The player may observe partial
feedbacks from the selected arms to help her in decision making. CMAB has wide applications
in online advertising, online recommendation, wireless routing, dynamic channel allocations, etc.,
because in all these settings the action unit is a combinatorial object (e.g. a set of advertisements, a
set of recommended items, a route in a wireless network, and an allocation between channels and
users), and the reward depends on unknown stochastic behaviors (e.g. users? click through behaviors,
wireless transmission quality, etc.). Therefore CMAB has attracted a lot of attention in online learning
research in recent years [12, 8, 22, 15, 7, 16, 18, 17, 23, 9].
Most of these studies focus on linear reward functions, for which the expected reward for playing a
super arm is a linear combination of the expected outcomes from the constituent base arms. Even for
studies that do generalize to non-linear reward functions, they typically still assume that the expected
reward for choosing a super arm is a function of the expected outcomes from the constituent base
arms in this super arm [8, 17]. However, many natural reward functions do not satisfy this property.
For example, for the function max(), which takes a group of variables and outputs the maximum one
among them, its expectation depends on the full distributions of the input random variables, not just
their means. Function max() and its variants underly many applications. As an illustrative example,
we consider the following scenario in auctions: the auctioneer is repeatedly selling an item to m
bidders; in each round the auctioneer selects K bidders to bid; each of the K bidders independently
draws her bid from her private valuation distribution and submits the bid; the auctioneer uses the
first-price auction to determine the winner and collects the largest bid as the payment.1 The goal of
the auctioneer is to gain as high cumulative payments as possible. We refer to this problem as the
K-MAX bandit problem, which cannot be effectively solved in the existing CMAB framework.
Beyond the K-MAX problem, many expected utility maximization (EUM) problems are studied
in stochastic optimization literature [27, 20, 21, 4]. The problem can be formulated as maximizing
E[u(Σ_{i∈S} X_i)] among all feasible sets S, where the X_i's are independent random variables and u(·) is
a utility function. For example, X_i could be the random delay of edge e_i in a routing graph, S is a
routing path in the graph, and the objective is maximizing the utility obtained from any routing path,
and typically the shorter the delay, the larger the utility. The utility function u(·) is typically nonlinear
to model risk-averse or risk-prone behaviors of users (e.g. a concave utility function is often used to
model risk-averse behaviors). The non-linear utility function makes the objective function much more
complicated: in particular, it is no longer a function of the means of the underlying random variables
X_i. When the distributions of the X_i's are unknown, we can turn EUM into an online learning problem
where the distributions of the X_i's need to be learned over time from online feedbacks, and we want to
CMAB framework since only learning the means of the X_i's is not enough.
In this paper, we generalize the existing CMAB framework with semi-bandit feedbacks to handle
general reward functions, where the expected reward for playing a super arm may depend more
than just the means of the base arms, and the outcome distribution of a base arm can be arbitrary.
This generalization is non-trivial, because almost all previous works on CMAB rely on estimating
the expected outcomes from base arms, while in our case, we need an estimation method and an
analytical tool to deal with the whole distribution, not just its mean. To this end, we turn the problem
into estimating the cumulative distribution function (CDF) of each arm's outcome distribution. We
use stochastically dominant confidence bound (SDCB) to obtain a distribution that stochastically
dominates the true distribution with high probability, and hence we also name our algorithm SDCB.
We are able to show O(log T) distribution-dependent and Õ(√T) distribution-independent regret
bounds in T rounds. Furthermore, we propose a more efficient algorithm called Lazy-SDCB, which
first executes a discretization step and then applies SDCB on the discretized problem. We show that
Lazy-SDCB also achieves an Õ(√T) distribution-independent regret bound. Our regret bounds are
tight with respect to their dependencies on T (up to a logarithmic factor for distribution-independent
bounds). To make our scheme work, we make a few reasonable assumptions, including boundedness,
monotonicity and Lipschitz-continuity2 of the reward function, and independence among base arms.
We apply our algorithms to the K-MAX and EUM problems, and provide efficient solutions with
concrete regret bounds. Along the way, we also provide the first polynomial time approximation
¹ We understand that the first-price auction is not truthful, but this example is only for illustrative purpose for the max() function.
² The Lipschitz-continuity assumption is only made for Lazy-SDCB. See Section 4.
scheme (PTAS) for the offline K-MAX problem, which is formulated as maximizing E[max_{i∈S} X_i]
subject to a cardinality constraint |S| ≤ K, where the X_i's are independent nonnegative random
variables.
To summarize, our contributions include: (a) generalizing the CMAB framework to allow a general
reward function whose expectation may depend on the entire distributions of the input random
variables; (b) proposing the SDCB algorithm to achieve efficient learning in this framework with
near-optimal regret bounds, even for arbitrary outcome distributions; (c) giving the first PTAS for the
offline K-MAX problem. Our general framework treats any offline stochastic optimization algorithm
as an oracle, and effectively integrates it into the online learning framework.
Related Work. As already mentioned, most relevant to our work are studies on CMAB frameworks,
among which [12, 16, 18, 9] focus on linear reward functions while [8, 17] look into non-linear
reward functions. In particular, Chen et al. [8] look at general non-linear reward functions and Kveton
et al. [17] consider specific non-linear reward functions in a conjunctive or disjunctive form, but
both papers require that the expected reward of playing a super arm is determined by the expected
outcomes from base arms.
The only work in combinatorial bandits we are aware of that does not require the above assumption on
the expected reward is [15], which is based on a general Thompson sampling framework. However,
they assume that the joint distribution of base arm outcomes is from a known parametric family within
known likelihood function and only the parameters are unknown. They also assume the parameter
space to be finite. In contrast, our general case is non-parametric, where we allow arbitrary bounded
distributions. Although in our known finite support case the distribution can be parametrized by
probabilities on all supported points, our parameter space is continuous. Moreover, it is unclear how
to efficiently compute posteriors in their algorithm, and their regret bounds depend on complicated
problem-dependent coefficients which may be very large for many combinatorial problems. They
also provide a result on the K-MAX problem, but they only consider Bernoulli outcomes from base
arms, much simpler than our case where general distributions are allowed.
There are extensive studies on the classical MAB problem, for which we refer to a survey by Bubeck
and Cesa-Bianchi [5]. There are also some studies on adversarial combinatorial bandits, e.g. [26, 6].
Although it bears conceptual similarities with stochastic CMAB, the techniques used are different.
Expected utility maximization (EUM) encompasses a large class of stochastic optimization problems
and has been well studied (e.g. [27, 20, 21, 4]). To the best of our knowledge, we are the first to study
the online learning version of these problems, and we provide a general solution to systematically
address all these problems as long as there is an available offline (approximation) algorithm. The
K-MAX problem may be traced back to [13], where Goel et al. provide a constant approximation
algorithm to a generalized version in which the objective is to choose a subset S of cost at most K
and maximize the expectation of a certain knapsack profit.
2
Setup and Notation
Problem Formulation. We model a combinatorial multi-armed bandit (CMAB) problem as a tuple
(E, F, D, R), where E = [m] = {1, 2, . . . , m} is a set of m (base) arms, F ⊆ 2^E is a set of subsets
of E, D is a probability distribution over [0, 1]^m, and R is a reward function defined on [0, 1]^m × F.
The arms produce stochastic outcomes X = (X_1, X_2, . . . , X_m) drawn from distribution D, where
the i-th entry X_i is the outcome from the i-th arm. Each feasible subset of arms S ∈ F is called a
super arm. Under a realization of outcomes x = (x_1, . . . , x_m), the player receives a reward R(x, S)
when she chooses the super arm S to play. Without loss of generality, we assume the reward value to
be nonnegative. Let K = max_{S∈F} |S| be the maximum size of any super arm.
Let X^(1), X^(2), . . . be an i.i.d. sequence of random vectors drawn from D, where X^(t) =
(X^(t)_1, . . . , X^(t)_m) is the outcome vector generated in the t-th round. In the t-th round, the player
chooses a super arm S_t ∈ F to play, and then the outcomes from all arms in S_t, i.e., {X^(t)_i | i ∈ S_t},
are revealed to the player. According to the definition of the reward function, the reward value in the
t-th round is R(X^(t), S_t). The expected reward for choosing a super arm S in any round is denoted
by r_D(S) = E_{X∼D}[R(X, S)].
We also assume that for a fixed super arm S ∈ F, the reward R(x, S) only depends on the revealed
outcomes x_S = (x_i)_{i∈S}. Therefore, we can alternatively express R(x, S) as R_S(x_S), where R_S is a
function defined on [0, 1]^S.³
A learning algorithm A for the CMAB problem selects which super arm to play in each round
based on the revealed outcomes in all previous rounds. Let S_t^A be the super arm selected by A
in the t-th round.⁴ The goal is to maximize the expected cumulative reward in T rounds, which
is E[Σ_{t=1}^T R(X^(t), S_t^A)] = Σ_{t=1}^T E[r_D(S_t^A)]. Note that when the underlying distribution D is
known, the optimal algorithm A* chooses the optimal super arm S* = argmax_{S∈F} {r_D(S)} in every
round. The quality of an algorithm A is measured by its regret in T rounds, which is the difference
between the expected cumulative reward of the optimal algorithm A* and that of A:
    Reg^A_D(T) = T · r_D(S*) − Σ_{t=1}^T E[r_D(S_t^A)].
For some CMAB problem instances, the optimal super arm S* may be computationally hard to find
even when the distribution D is known, but efficient approximation algorithms may exist, i.e., an
α-approximate (0 < α ≤ 1) solution S′ ∈ F which satisfies r_D(S′) ≥ α · max_{S∈F} {r_D(S)} can be
efficiently found given D as input. We will provide the exact formulation of our requirement on such
an α-approximation computation oracle shortly. In such cases, it is not fair to compare a CMAB
algorithm A with the optimal algorithm A* which always chooses the optimal super arm S*. Instead,
we define the α-approximation regret of an algorithm A as
    Reg^A_{D,α}(T) = T · α · r_D(S*) − Σ_{t=1}^T E[r_D(S_t^A)].
As mentioned, almost all previous work on CMAB requires that the expected reward rD (S) of
a super arm S depends only on the expectation vector μ = (μ_1, . . . , μ_m) of outcomes, where
μ_i = E_{X∼D}[X_i]. This is a strong restriction that cannot be satisfied by a general nonlinear function
R_S and a general distribution D. The main motivation of this work is to remove this restriction.
Assumptions. Throughout this paper, we make several assumptions on the outcome distribution D
and the reward function R.
Assumption 1 (Independent outcomes from arms). The outcomes from all m arms are mutually
independent, i.e., for X ∼ D, X_1, X_2, . . . , X_m are mutually independent. We write D as D =
D_1 × D_2 × · · · × D_m, where D_i is the distribution of X_i.
We remark that the above independence assumption is also made for past studies on the offline EUM
and K-MAX problems [27, 20, 21, 4, 13], so it is not an extra assumption for the online learning case.
Assumption 2 (Bounded reward value). There exists M > 0 such that for any x ∈ [0, 1]^m and any
S ∈ F, we have 0 ≤ R(x, S) ≤ M.
Assumption 3 (Monotone reward function). If two vectors x, x′ ∈ [0, 1]^m satisfy x_i ≤ x′_i (∀i ∈ [m]),
then for any S ∈ F, we have R(x, S) ≤ R(x′, S).
Computation Oracle for Discrete Distributions with Finite Supports. We require that there
exists an α-approximation computation oracle (0 < α ≤ 1) for maximizing r_D(S), when each D_i
(i ∈ [m]) has a finite support. In this case, D_i can be fully described by a finite set of numbers
(i.e., its support {v_{i,1}, v_{i,2}, . . . , v_{i,s_i}} and the values of its cumulative distribution function (CDF)
F_i on the supported points: F_i(v_{i,j}) = Pr_{X_i∼D_i}[X_i ≤ v_{i,j}] (j ∈ [s_i])). The oracle takes such a
representation of D as input, and can output a super arm S′ = Oracle(D) ∈ F such that r_D(S′) ≥
α · max_{S∈F} {r_D(S)}.
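To make the oracle interface concrete, here is a minimal Python sketch (our own illustration, not code from the paper) of the finite-support representation described above; the names FiniteDist and Oracle are ours.

```python
from dataclasses import dataclass
from typing import Callable, FrozenSet, List, Sequence

@dataclass
class FiniteDist:
    """A discrete distribution given by its support and CDF values.

    support[j] holds v_{i,j+1} in ascending order and cdf[j] holds
    F_i(v_{i,j+1}) = Pr[X_i <= v_{i,j+1}]; the last CDF value must be 1.0.
    """
    support: List[float]
    cdf: List[float]

    def pmf(self) -> List[float]:
        # Point masses are consecutive differences of the CDF.
        prev, masses = 0.0, []
        for c in self.cdf:
            masses.append(c - prev)
            prev = c
        return masses

# An alpha-approximation oracle is any callable with this signature: it
# receives one FiniteDist per arm and returns a feasible super arm whose
# expected reward is at least alpha times the best achievable one.
Oracle = Callable[[Sequence[FiniteDist]], FrozenSet[int]]
```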
3
SDCB Algorithm
³ [0, 1]^S is isomorphic to [0, 1]^{|S|}; the coordinates in [0, 1]^S are indexed by elements in S.
⁴ Note that S_t^A may be random due to the random outcomes in previous rounds and the possible randomness used by A.
Algorithm 1 SDCB (Stochastically dominant confidence bound)
1: Throughout the algorithm, for each arm i ∈ [m], maintain: (i) a counter T_i which stores the
   number of times arm i has been played so far, and (ii) the empirical distribution D̂_i of the
   observed outcomes from arm i so far, which is represented by its CDF F̂_i
2: // Initialization
3: for i = 1 to m do
4:    // Action in the i-th round
5:    Play a super arm S_i that contains arm i
6:    Update T_j and F̂_j for each j ∈ S_i
7: end for
8: for t = m + 1, m + 2, . . . do
9:    // Action in the t-th round
10:   For each i ∈ [m], let D̲_i be a distribution whose CDF F̲_i is
          F̲_i(x) = max{ F̂_i(x) − √(3 ln t / (2 T_i)), 0 }  for 0 ≤ x < 1,   and   F̲_i(1) = 1
11:   Play the super arm S_t ← Oracle(D̲), where D̲ = D̲_1 × D̲_2 × · · · × D̲_m
12:   Update T_j and F̂_j for each j ∈ S_t
13: end for
We present our algorithm stochastically dominant confidence bound (SDCB) in Algorithm 1. Throughout the algorithm, we store, in a variable T_i, the number of times the outcomes from arm i are observed
so far. We also maintain the empirical distribution D̂_i of the observed outcomes from arm i so far,
which can be represented by its CDF F̂_i: for x ∈ [0, 1], the value of F̂_i(x) is just the fraction of
the observed outcomes from arm i that are no larger than x. Note that F̂_i is always a step function
which has "jumps" at the points that are observed outcomes from arm i. Therefore it suffices to store
these discrete points as well as the values of F̂_i at these points in order to store the whole function
F̂_i. Similarly, the later computation of the stochastically dominant CDF F̲_i (line 10) only requires
computation at these points, and the input to the offline oracle only needs to provide these points and
corresponding CDF values (line 11).
The algorithm starts with m initialization rounds in which each arm is played at least once⁵ (lines 2-7).
In the t-th round (t > m), the algorithm consists of three steps. First, it calculates for each i ∈ [m] a
distribution D̲_i whose CDF F̲_i is obtained by lowering the CDF F̂_i (line 10). The second step is to
call the α-approximation oracle with the newly constructed distribution D̲ = D̲_1 × · · · × D̲_m as input
(line 11), and thus the super arm S_t output by the oracle satisfies r_D̲(S_t) ≥ α · max_{S∈F} {r_D̲(S)}.
Finally, the algorithm chooses the super arm S_t to play, observes the outcomes from all arms in S_t,
and updates the T_j's and F̂_j's accordingly for each j ∈ S_t.
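The only nonstandard computation here is line 10 of Algorithm 1. A minimal sketch (our own illustration), assuming the empirical CDF is stored as the sorted observed jump points together with their cumulative fractions:

```python
import math
from typing import List, Tuple

def dominant_cdf(points: List[float], emp_cdf: List[float],
                 t: int, T_i: int) -> Tuple[List[float], List[float]]:
    """Line 10 of Algorithm 1: lower the empirical CDF of arm i.

    points:  sorted observed outcomes of arm i (the jump points of F-hat_i).
    emp_cdf: F-hat_i evaluated at those points.
    Returns support points and CDF values of the lowered distribution:
    F(x) = max(F-hat_i(x) - sqrt(3 ln t / (2 T_i)), 0) for x < 1, F(1) = 1.
    """
    shift = math.sqrt(3.0 * math.log(t) / (2.0 * T_i))
    new_points, new_cdf = [], []
    for x, f in zip(points, emp_cdf):
        if x < 1.0:
            new_points.append(x)
            new_cdf.append(max(f - shift, 0.0))
    # The probability mass removed below is implicitly moved to x = 1.
    new_points.append(1.0)
    new_cdf.append(1.0)
    return new_points, new_cdf
```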
The idea behind our algorithm is the optimism in the face of uncertainty principle, which is the key
principle behind UCB-type algorithms. Our algorithm ensures that with high probability we have
F̲_i(x) ≤ F_i(x) simultaneously for all i ∈ [m] and all x ∈ [0, 1], where F_i is the CDF of the outcome
distribution D_i, i.e., each D̲_i has first-order stochastic dominance over D_i.⁶ Then from
the monotonicity property of R(x, S) (Assumption 3) we know that r_D̲(S) ≥ r_D(S) holds for all
S ∈ F with high probability. Therefore D̲ provides an "optimistic" estimation on the expected
reward from each super arm.
Regret Bounds. We prove O(log T) distribution-dependent and O(√(T log T)) distribution-independent upper bounds on the regret of SDCB (Algorithm 1).
⁵ Without loss of generality, we assume that each arm i ∈ [m] is contained in at least one super arm.
⁶ We remark that while F̲_i(x) is a numerical lower confidence bound on F_i(x) for all x ∈ [0, 1], at the distribution level, D̲_i serves as a "stochastically dominant (upper) confidence bound" on D_i.
We call a super arm S bad if r_D(S) < α · r_D(S*). For each super arm S, we define
    Δ_S = max{α · r_D(S*) − r_D(S), 0}.
Let F_B = {S ∈ F | Δ_S > 0}, which is the set of all bad super arms. Let E_B ⊆ [m] be the set of
arms that are contained in at least one bad super arm. For each i ∈ E_B, we define
    Δ_{i,min} = min{Δ_S | S ∈ F_B, i ∈ S}.
Recall that M is an upper bound on the reward value (Assumption 2) and K = max_{S∈F} |S|.
Theorem 1. A distribution-dependent upper bound on the α-approximation regret of SDCB (Algorithm 1) in T rounds is
    Σ_{i∈E_B} (2136 / Δ_{i,min}) M²K ln T + (π²/3 + 1) αMm,
and a distribution-independent upper bound is
    93M √(mKT ln T) + (π²/3 + 1) αMm.
The proof of Theorem 1 is given in the supplementary material. The main idea is to reduce our
analysis on general reward functions satisfying Assumptions 1-3 to the one in [18] that deals with
the summation reward function R(x, S) = Σ_{i∈S} x_i. Our analysis relies on the Dvoretzky-Kiefer-Wolfowitz inequality [10, 24], which gives a uniform concentration bound on the empirical CDF of a
distribution.
Applying Our Algorithm to the Previous CMAB Framework. Although our focus is on general
reward functions, we note that when SDCB is applied to the previous CMAB framework where the
expected reward depends only on the means of the random variables, it can achieve the same regret
bounds as the previous combinatorial upper confidence bound (CUCB) algorithm in [8, 18].
Let μ_i = E_{X∼D}[X_i] be arm i's mean outcome. In each round CUCB calculates (for each arm i) an
upper confidence bound μ̄_i on μ_i, with the essential property that μ_i ≤ μ̄_i ≤ μ_i + Λ_i holds with
high probability, for some Λ_i > 0. In SDCB, we use D̲_i as a stochastically dominant confidence
bound of D_i. We can show that μ_i ≤ E_{Y_i∼D̲_i}[Y_i] ≤ μ_i + Λ_i holds with high probability, with the
same interval length Λ_i as in CUCB. (The proof is given in the supplementary material.) Hence, the
analysis in [8, 18] can be applied to SDCB, resulting in the same regret bounds. We further remark that
in this case we do not need the three assumptions stated in Section 2 (in particular the independence
assumption on the X_i's): the summation reward case just works as in [18] and the nonlinear reward case
relies on the properties of monotonicity and bounded smoothness used in [8].
4
Improved SDCB Algorithm by Discretization
In Section 3, we have shown that our algorithm SDCB achieves near-optimal regret bounds. However,
that algorithm might suffer from large running time and memory usage. Note that, in the t-th round,
an arm i might have been observed t − 1 times already, and it is possible that all the observed values
from arm i are different (e.g., when arm i's outcome distribution D_i is continuous). In such case,
it takes Θ(t) space to store the empirical CDF F̂_i of the observed outcomes from arm i, and both
calculating the stochastically dominant CDF F̲_i and updating F̂_i take Θ(t) time. Therefore, the
worst-case space usage of SDCB in T rounds is Θ(T), and the worst-case running time is Θ(T²)
(ignoring the dependence on m and K); here we do not count the time and space used by the offline
computation oracle.
In this section, we propose an improved algorithm Lazy-SDCB, which reduces the worst-case memory
usage and running time to O(√T) and O(T^{3/2}), respectively, while preserving the O(√(T log T))
distribution-independent regret bound. To this end, we need an additional assumption on the reward
function:
Assumption 4 (Lipschitz-continuous reward function). There exists C > 0 such that for any S ∈ F
and any x, x′ ∈ [0, 1]^m, we have |R(x, S) − R(x′, S)| ≤ C‖x_S − x′_S‖₁, where ‖x_S − x′_S‖₁ =
Σ_{i∈S} |x_i − x′_i|.
Algorithm 2 Lazy-SDCB with known time horizon
Input: time horizon T
1: s ← ⌈√T⌉
2: I_j ← [0, 1/s] for j = 1;  I_j ← ((j−1)/s, j/s] for j = 2, . . . , s
3: Invoke SDCB (Algorithm 1) for T rounds, with the following change: whenever observing an
   outcome x (from any arm), find j ∈ [s] such that x ∈ I_j, and regard this outcome as j/s
Algorithm 3 Lazy-SDCB without knowing the time horizon
1: q ← ⌈log₂ m⌉
2: In rounds 1, 2, . . . , 2^q, invoke Algorithm 2 with input T = 2^q
3: for k = q, q + 1, q + 2, . . . do
4:    In rounds 2^k + 1, 2^k + 2, . . . , 2^{k+1}, invoke Algorithm 2 with input T = 2^k
5: end for
We first describe the algorithm when the time horizon T is known in advance. The algorithm is
summarized in Algorithm 2. We perform a discretization on the distribution D = D_1 × · · · × D_m to
obtain a discrete distribution D̃ = D̃_1 × · · · × D̃_m such that (i) for X̃ ∼ D̃, X̃_1, . . . , X̃_m are also
mutually independent, and (ii) every D̃_i is supported on a set of equally-spaced values {1/s, 2/s, . . . , 1},
where s is set to be ⌈√T⌉. Specifically, we partition [0, 1] into s intervals: I_1 = [0, 1/s], I_2 =
(1/s, 2/s], . . . , I_{s−1} = ((s−2)/s, (s−1)/s], I_s = ((s−1)/s, 1], and define D̃_i as
    Pr_{X̃_i∼D̃_i}[X̃_i = j/s] = Pr_{X_i∼D_i}[X_i ∈ I_j],   j = 1, . . . , s.
For the CMAB problem ([m], F, D, R), our algorithm "pretends" that the outcomes are drawn from
D̃ instead of D, by replacing any outcome x ∈ I_j by j/s (∀j ∈ [s]), and then applies SDCB to the
problem ([m], F, D̃, R). Since each D̃_i has a known support {1/s, 2/s, . . . , 1}, the algorithm only needs
to maintain the number of occurrences of each support value in order to obtain the empirical CDF of
all the observed outcomes from arm i. Therefore, all the operations in a round can be done using
O(s) = O(√T) time and space, and the total time and space used by Lazy-SDCB are O(T^{3/2}) and
O(√T), respectively.
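The per-outcome discretization step admits a one-line implementation; the sketch below is our own illustration of the interval-to-endpoint mapping used in Algorithm 2.

```python
import math

def discretize(x: float, s: int) -> float:
    """Map an outcome x in [0, 1] to the right endpoint j/s of its interval.

    I_1 = [0, 1/s] and I_j = ((j-1)/s, j/s] for j >= 2, so j = ceil(x * s),
    except that x = 0 also falls into I_1.
    """
    j = max(1, math.ceil(x * s))
    return j / s

# Per-arm storage then reduces to s counters: counts[j-1] is the number of
# observations that fell into I_j, and the empirical CDF of the discretized
# arm at support point j/s is sum(counts[:j]) / sum(counts).
```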
The discretization parameter s in Algorithm 2 depends on the time horizon T , which is why Algorithm 2 has to know T in advance. We can use the doubling trick to avoid the dependency on T . We
present such an algorithm (without knowing T ) in Algorithm 3. It is easy to see that Algorithm 3 has
the same asymptotic time and space usages as Algorithm 2.
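The doubling trick of Algorithm 3 can be written as a short driver loop; this sketch is our own illustration and assumes a routine run_lazy_sdcb(T) implementing Algorithm 2 for a fixed horizon T.

```python
import math
from typing import Callable

def run_with_doubling(m: int, total_rounds: int,
                      run_lazy_sdcb: Callable[[int], None]) -> None:
    """Algorithm 3: restart Algorithm 2 on epochs of doubling length."""
    q = math.ceil(math.log2(m))      # assumes m >= 2
    run_lazy_sdcb(2 ** q)            # rounds 1 .. 2^q
    played, k = 2 ** q, q
    while played < total_rounds:
        run_lazy_sdcb(2 ** k)        # rounds 2^k + 1 .. 2^(k+1)
        played += 2 ** k
        k += 1
```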
Regret Bounds. We show that both Algorithm 2 and Algorithm 3 achieve O(√(T log T))
distribution-independent regret bounds. The full proofs are given in the supplementary material.
Recall that C is the coefficient in the Lipschitz condition in Assumption 4.
Theorem 2. Suppose the time horizon T is known in advance. Then the α-approximation regret of
Algorithm 2 in T rounds is at most
    93M √(mKT ln T) + 2CK √T + (π²/3 + 1) αMm.
Proof Sketch. The regret consists of two parts: (i) the regret for the discretized CMAB problem
([m], F, D̃, R), and (ii) the error due to discretization. We directly apply Theorem 1 for the first
part. For the second part, a key step is to show |r_D(S) − r_D̃(S)| ≤ CK/s for all S ∈ F (see the
supplementary material).
Theorem 3. For any time horizon T ≥ 2, the α-approximation regret of Algorithm 3 in T rounds is
at most
    318M √(mKT ln T) + 7CK √T + 10αMm ln T.
5
Applications
We describe the K-MAX problem and the class of expected utility maximization problems as
applications of our general CMAB framework.
The K-MAX Problem. In this problem, the player is allowed to select at most K arms from the
set of m arms in each round, and the reward is the maximum one among the outcomes from the
selected arms. In other words, the set of feasible super arms is F = {S ⊆ [m] : |S| ≤ K}, and
the reward function is R(x, S) = max_{i∈S} x_i. It is easy to verify that this reward function satisfies
Assumptions 2, 3 and 4 with M = C = 1.
Now we consider the corresponding offline K-MAX problem of selecting at most K arms from
m independent arms, with the largest expected reward. It can be implied by a result in [14] that
finding the exact optimal solution is NP-hard, so we resort to approximation algorithms. We can
show, using submodularity, that a simple greedy algorithm can achieve a (1 − 1/e)-approximation.
Furthermore, we give the first PTAS for this problem. Our PTAS can be generalized to constraints
other than the cardinality constraint |S| ? K, including s-t simple paths, matchings, knapsacks, etc.
The algorithms and corresponding proofs are given in the supplementary material.
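To illustrate the greedy (1 − 1/e)-approximate oracle mentioned above (our own sketch; the paper's PTAS is more involved and is given in the supplementary material), note that for independent finite-support arms, E[max_{i∈S} X_i] can be computed exactly from the product of CDFs:

```python
from bisect import bisect_right
from typing import List, Tuple

Dist = Tuple[List[float], List[float]]   # (sorted support, CDF values)

def cdf_at(dist: Dist, x: float) -> float:
    """Pr[X <= x] for a finite-support distribution."""
    support, cdf = dist
    k = bisect_right(support, x)
    return cdf[k - 1] if k > 0 else 0.0

def expected_max(dists: List[Dist]) -> float:
    """Exact E[max_i X_i] for independent nonnegative finite-support X_i."""
    values = sorted({v for support, _ in dists for v in support})
    exp, prev = 0.0, 0.0
    for v in values:
        prob_le = 1.0
        for d in dists:
            prob_le *= cdf_at(d, v)    # Pr[max <= v] = prod_i F_i(v)
        exp += v * (prob_le - prev)    # Pr[max = v]
        prev = prob_le
    return exp

def greedy_kmax(dists: List[Dist], K: int) -> frozenset:
    """Greedy (1 - 1/e)-approximation via submodularity of S -> E[max]."""
    chosen: List[int] = []
    for _ in range(min(K, len(dists))):
        best_i, best_val = -1, -1.0
        for i in range(len(dists)):
            if i in chosen:
                continue
            val = expected_max([dists[j] for j in chosen + [i]])
            if val > best_val:
                best_i, best_val = i, val
        chosen.append(best_i)
    return frozenset(chosen)
```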
Theorem 4. There exists a PTAS for the offline K-MAX problem. In other words, for any constant
ε > 0, there is a polynomial-time (1 − ε)-approximation algorithm for the offline K-MAX problem.
We thus can apply our SDCB algorithm to the K-MAX bandit problem and obtain O(log T)
distribution-dependent and Õ(√T) distribution-independent regret bounds according to Theorem 1,
or can apply Lazy-SDCB to get an Õ(√T) distribution-independent bound according to Theorem 2 or 3.
Streeter and Golovin [26] study an online submodular maximization problem in the oblivious
adversary model. In particular, their result can cover the stochastic K-MAX bandit problem as a
special case, and an O(K√(mT log m)) upper bound on the (1 − 1/e)-regret can be shown. While
the techniques in [26] can only give a bound on the (1 − 1/e)-approximation regret for K-MAX,
we can obtain the first Õ(√T) bound on the (1 − ε)-approximation regret for any constant ε > 0,
using our PTAS as the offline oracle. Even when we use the simple greedy algorithm as the oracle,
our experiments show that SDCB performs significantly better than the algorithm in [26] (see the
supplementary material).
Expected Utility Maximization.
Our framework can also be applied to reward functions of the
form R(x, S) = u(Σ_{i∈S} x_i), where u(·) is an increasing utility function. The corresponding offline
problem is to maximize the expected utility E[u(Σ_{i∈S} x_i)] subject to a feasibility constraint S ∈ F.
Note that if u is nonlinear, the expected utility may not be a function of the means of the arms in
S. Following the celebrated von Neumann-Morgenstern expected utility theorem, nonlinear utility
functions have been extensively used to capture risk-averse or risk-prone behaviors in economics (see
e.g., [11]), while linear utility functions correspond to risk-neutrality.
Li and Deshpande [20] obtain a PTAS for the expected utility maximization (EUM) problem for
several classes of utility functions (including for example increasing concave functions which
typically indicate risk-averseness), and a large class of feasibility constraints (including cardinality
constraint, s-t simple paths, matchings, and knapsacks). Similar results for other utility functions and
feasibility constraints can be found in [27, 21, 4]. In the online problem, we can apply our algorithms,
using their PTASs as the offline oracle. Again, we can obtain the first tight regret bounds on the
(1 − ε)-approximation regret for any ε > 0, for the class of online EUM problems.
Acknowledgments
Wei Chen was supported in part by the National Natural Science Foundation of China (Grant No.
61433014). Jian Li and Yu Liu were supported in part by the National Basic Research Program
of China grants 2015CB358700, 2011CBA00300, 2011CBA00301, and the National NSFC grants
61033001, 61361136003. The authors would like to thank Tor Lattimore for referring to us the DKW
inequality.
References
[1] Jean-Yves Audibert and Sébastien Bubeck. Minimax policies for adversarial and stochastic bandits. In COLT, pages 217–226, 2009.
[2] Peter Auer, Nicolo Cesa-Bianchi, and Paul Fischer. Finite-time analysis of the multiarmed bandit problem. Machine Learning, 47(2-3):235–256, 2002.
[3] Peter Auer, Nicolo Cesa-Bianchi, Yoav Freund, and Robert E. Schapire. The nonstochastic multiarmed bandit problem. SIAM Journal on Computing, 32(1):48–77, 2002.
[4] Anand Bhalgat and Sanjeev Khanna. A utility equivalence theorem for concave functions. In IPCO, pages 126–137. Springer, 2014.
[5] Sébastien Bubeck and Nicolò Cesa-Bianchi. Regret analysis of stochastic and nonstochastic multi-armed bandit problems. Foundations and Trends in Machine Learning, 5(1):1–122, 2012.
[6] Nicolo Cesa-Bianchi and Gábor Lugosi. Combinatorial bandits. Journal of Computer and System Sciences, 78(5):1404–1422, 2012.
[7] Shouyuan Chen, Tian Lin, Irwin King, Michael R. Lyu, and Wei Chen. Combinatorial pure exploration of multi-armed bandits. In NIPS, 2014.
[8] Wei Chen, Yajun Wang, Yang Yuan, and Qinshi Wang. Combinatorial multi-armed bandit and its extension to probabilistically triggered arms. Journal of Machine Learning Research, 17(50):1–33, 2016.
[9] Richard Combes, M. Sadegh Talebi, Alexandre Proutiere, and Marc Lelarge. Combinatorial bandits revisited. In NIPS, 2015.
[10] Aryeh Dvoretzky, Jack Kiefer, and Jacob Wolfowitz. Asymptotic minimax character of the sample distribution function and of the classical multinomial estimator. The Annals of Mathematical Statistics, pages 642–669, 1956.
[11] P. C. Fishburn. The foundations of expected utility. Dordrecht: Reidel, 1982.
[12] Yi Gai, Bhaskar Krishnamachari, and Rahul Jain. Combinatorial network optimization with unknown variables: Multi-armed bandits with linear rewards and individual observations. IEEE/ACM Transactions on Networking, 20(5):1466–1478, 2012.
[13] Ashish Goel, Sudipto Guha, and Kamesh Munagala. Asking the right questions: Model-driven optimization using probes. In PODS, pages 203–212. ACM, 2006.
[14] Ashish Goel, Sudipto Guha, and Kamesh Munagala. How to probe for an extreme value. ACM Transactions on Algorithms (TALG), 7(1):12:1–12:20, 2010.
[15] Aditya Gopalan, Shie Mannor, and Yishay Mansour. Thompson sampling for complex online problems. In ICML, pages 100–108, 2014.
[16] Branislav Kveton, Zheng Wen, Azin Ashkan, Hoda Eydgahi, and Brian Eriksson. Matroid bandits: Fast combinatorial optimization with learning. In UAI, pages 420–429, 2014.
[17] Branislav Kveton, Zheng Wen, Azin Ashkan, and Csaba Szepesvári. Combinatorial cascading bandits. In NIPS, 2015.
[18] Branislav Kveton, Zheng Wen, Azin Ashkan, and Csaba Szepesvári. Tight regret bounds for stochastic combinatorial semi-bandits. In AISTATS, pages 535–543, 2015.
[19] Tze Leung Lai and Herbert Robbins. Asymptotically efficient adaptive allocation rules. Advances in Applied Mathematics, 6(1):4–22, 1985.
[20] Jian Li and Amol Deshpande. Maximizing expected utility for stochastic combinatorial optimization problems. In FOCS, pages 797–806, 2011.
[21] Jian Li and Wen Yuan. Stochastic combinatorial optimization via Poisson approximation. In STOC, pages 971–980, 2013.
[22] Tian Lin, Bruno Abrahao, Robert Kleinberg, John Lui, and Wei Chen. Combinatorial partial monitoring game with linear feedback and its applications. In ICML, pages 901–909, 2014.
[23] Tian Lin, Jian Li, and Wei Chen. Stochastic online greedy learning with semi-bandit feedbacks. In NIPS, 2015.
[24] Pascal Massart. The tight constant in the Dvoretzky-Kiefer-Wolfowitz inequality. The Annals of Probability, pages 1269–1283, 1990.
[25] George L. Nemhauser, Laurence A. Wolsey, and Marshall L. Fisher. An analysis of approximations for maximizing submodular set functions – I. Mathematical Programming, 14(1):265–294, 1978.
[26] Matthew Streeter and Daniel Golovin. An online algorithm for maximizing submodular functions. In NIPS, 2008.
[27] Jiajin Yu and Shabbir Ahmed. Maximizing expected utility over a knapsack constraint. Operations Research Letters, 44(2):180–185, 2016.
6,094 | 6,512 | LightRNN: Memory and Computation-Efficient
Recurrent Neural Networks
Xiang Li¹   Tao Qin²   Jian Yang¹   Tie-Yan Liu²
¹Nanjing University of Science and Technology   ²Microsoft Research Asia
¹implusdream@gmail.com   ¹csjyang@njust.edu.cn
²{taoqin, tie-yan.liu}@microsoft.com
Abstract
Recurrent neural networks (RNNs) have achieved state-of-the-art performances in
many natural language processing tasks, such as language modeling and machine
translation. However, when the vocabulary is large, the RNN model will become
very big (e.g., possibly beyond the memory capacity of a GPU device) and its
training will become very inefficient. In this work, we propose a novel technique to
tackle this challenge. The key idea is to use 2-Component (2C) shared embedding
for word representations. We allocate every word in the vocabulary into a table,
each row of which is associated with a vector, and each column associated with
another vector. Depending on its position in the table, a word is jointly represented
by two components: a row vector and a column vector. Since the words in the
same row share the row vector and the words in the same column share the column
vector, we only need 2√|V| vectors to represent a vocabulary of |V| unique words,
which are far less than the |V | vectors required by existing approaches. Based
on the 2-Component shared embedding, we design a new RNN algorithm and
evaluate it using the language modeling task on several benchmark datasets. The
results show that our algorithm significantly reduces the model size and speeds
up the training process, without sacrifice of accuracy (it achieves similar, if not
better, perplexity as compared to state-of-the-art language models). Remarkably,
on the One-Billion-Word benchmark Dataset, our algorithm achieves comparable
perplexity to previous language models, whilst reducing the model size by a factor
of 40-100, and speeding up the training process by a factor of 2. We name our
proposed algorithm LightRNN to reflect its very small model size and very high
training speed.
1
Introduction
Recently recurrent neural networks (RNNs) have been used in many natural language processing
(NLP) tasks, such as language modeling [14], machine translation [23], sentiment analysis [24],
and question answering [26]. A popular RNN architecture is long short-term memory (LSTM)
[8, 11, 22], which can model long-term dependence and resolve the gradient-vanishing problem
by using memory cells and gating functions. With these elements, LSTM RNNs have achieved
state-of-the-art performance in several NLP tasks, although almost learning from scratch.
While RNNs are becoming increasingly popular, they have a known limitation: when applied to
textual corpora with large vocabularies, the size of the model will become very big. For instance,
when using RNNs for language modeling, a word is first mapped from a one-hot vector (whose
dimension is equal to the size of the vocabulary) to an embedding vector by an input-embedding
matrix. Then, to predict the probability of the next word, the top hidden layer is projected by an
output-embedding matrix onto a probability distribution over all the words in the vocabulary. When
30th Conference on Neural Information Processing Systems (NIPS 2016), Barcelona, Spain.
the vocabulary contains tens of millions of unique words, which is very common in Web corpora, the
two embedding matrices will contain tens of billions of elements, making the RNN model too big to
fit into the memory of GPU devices. Take the ClueWeb dataset [19] as an example, whose vocabulary
contains over 10M words. If the embedding vectors are of 1024 dimensions and each dimension is
represented by a 32-bit floating point, the size of the input-embedding matrix will be around 40GB.
Further considering the output-embedding matrix and those weights between hidden layers, the RNN
model will be larger than 80GB, which is far beyond the capability of the best GPU devices on the
market [2]. Even if the memory constraint is not a problem, the computational complexity for training
such a big model will also be too high to afford. In RNN language models, the most time-consuming
operation is to calculate a probability distribution over all the words in the vocabulary, which requires
the multiplication of the output-embedding matrix and the hidden state at each position of a sequence.
According to simple calculations, we can get that it will take tens of years for the best single GPU
today to finish the training of a language model on the ClueWeb dataset. Furthermore, in addition
to the challenges during the training phase, even if we can successfully train such a big model, it is
almost impossible to host it in mobile devices for efficient inferences.
To address the above challenges, in this work, we propose to use 2-Component (2C) shared embedding
for word representations in RNNs. We allocate all the words in the vocabulary into a table, each row
of which is associated with a vector, and each column associated with another vector. Then we use
two components to represent a word depending on its position in the table: the corresponding row
vector and column vector. Since the words in the same row share the row vector and the words in the
same column share the column vector, we only need 2√|V| vectors to represent a vocabulary with
|V| unique words, and thus greatly reduce the model size as compared with the vanilla approach that
needs |V | unique vectors. In the meanwhile, due to the reduced model size, the training of the RNN
model can also significantly speed up. We therefore call our proposed new algorithm (LightRNN), to
reflect its very small model size and very high training speed.
A central technical challenge of our approach is how to appropriately allocate the words into the table.
To this end, we propose a bootstrap framework: (1) We first randomly initialize the word allocation
and then train the LightRNN model. (2) We fix the trained embedding vectors (corresponding to the
row and column vectors in the table), and refine the allocation to minimize the training loss, which is
a minimum weight perfect matching problem in graph theory and can be effectively solved. (3) We
repeat the second step until certain stopping criterion is met.
We evaluate LightRNN using the language modeling task on several benchmark datasets. The
experimental results show that LightRNN achieves comparable (if not better) accuracy to state-of-the-art language models in terms of perplexity, while reducing the model size by a factor of up to 100
and speeding up the training process by a factor of 2.
Please note that it is desirable to have a highly compact model (without accuracy drop). First, it
makes it possible to put the RNN model into a GPU or even a mobile device. Second, if the training
data is large and one needs to perform distributed data-parallel training, the communication cost for
aggregating the models from local workers will be low. In this way, our approach makes previously
expensive RNN algorithms very economical and scalable, and therefore has its profound impact on
deep learning for NLP tasks.
2
Related work
In the literature of deep learning, there have been several works that try to resolve the problem caused
by the large vocabulary of the text corpus.
Some works focus on reducing the computational complexity of the softmax operation on the output-embedding matrix. In [16, 17], a binary tree is used to represent a hierarchical clustering of words in
the vocabulary. Each leaf node of the tree is associated with a word, and every word has a unique
path from the root to the leaf where it is in. In this way, when calculating the probability of the
next word, one can replace the original |V |-way normalization with a sequence of log |V | binary
normalizations. In [9, 15], the words in the vocabulary are organized into a tree with two layers: the
root node has roughly √|V| intermediate nodes, each of which also has roughly √|V| leaf nodes.
Each intermediate node represents a cluster of words, and each leaf node represents a word in the
cluster. To calculate the probability of the next word, one first calculates the probability of the cluster
of the word and then the conditional probability of the word given its cluster. Besides, methods based
2
on sampling-based approximations intend to select randomly or heuristically a small subset of the
output layer and estimate the gradient only from those samples, such as importance sampling [3]
and BlackOut [12]. Although these methods can speed up the training process by means of efficient
softmax, they do not reduce the size of the model.
Some other works focus on reducing the model size. Techniques [6, 21] like differentiated softmax
and recurrent projection are employed to reduce the size of the output-embedding matrix. However,
they only slightly compress the model, and the number of parameters is still in the same order of
the vocabulary size. Character-level convolutional filters are used to shrink the size of the input-embedding matrix in [13]. However, it still suffers from the gigantic output-embedding matrix.
Besides, these methods have not addressed the challenge of computational complexity caused by the
time-consuming softmax operations.
As can be seen from the above introductions, no existing work has simultaneously achieved the
significant reduction of both model size and computational complexity. This is exactly the problem
that we will address in this paper.
3
LightRNN
In this section, we introduce our proposed LightRNN algorithm.
3.1
RNN Model with 2-Component Shared Embedding
A key technical innovation in the LightRNN algorithm is its 2-Component shared embedding for
word representations. As shown in Figure 1, we allocate all the words in the vocabulary into a table.
The i-th row of the table is associated with an embedding vector x^r_i and the j-th column of the table
is associated with an embedding vector x^c_j. Then a word in the i-th row and the j-th column is
represented by two components: x^r_i and x^c_j. By sharing the embedding vector among words in the
same row (and also in the same column), for a vocabulary with |V| words, we only need 2√|V|
unique vectors for the input word embedding. It is the same case for the output word embedding.
Figure 1: An example of the word table
With the 2-Component shared embedding, we can construct the LightRNN model by doubling the
basic units of a vanilla RNN model, as shown in Figure 2. Let n and m denote the dimension of a
row/column input vector and that of a hidden state vector respectively. To compute the probability
distribution of w_t, we need to use the column vector x^c_{t−1} ∈ R^n, the row vector x^r_t ∈ R^n, and the
hidden state vector h^r_{t−1} ∈ R^m.
Figure 2: LightRNN (left) vs. Conventional RNN (right).
The column and row vectors are from input-embedding matrices X^c, X^r ∈ R^{n×√|V|} respectively.
Next two hidden state vectors h^c_{t−1}, h^r_t ∈ R^m are produced by applying the following recursive
operations:
    h^c_{t−1} = f(W x^c_{t−1} + U h^r_{t−1} + b)    h^r_t = f(W x^r_t + U h^c_{t−1} + b).    (1)
In the above function, W ∈ R^{m×n}, U ∈ R^{m×m}, b ∈ R^m are parameters of affine transformations,
and f is a nonlinear activation function (e.g., the sigmoid function).
The probability P(w_t) of a word w at position t is determined by its row probability P_r(w_t) and
column probability P_c(w_t):
    P_r(w_t) = exp(h^c_{t−1} · y^r_{r(w)}) / Σ_{i∈S_r} exp(h^c_{t−1} · y^r_i),    P_c(w_t) = exp(h^r_t · y^c_{c(w)}) / Σ_{i∈S_c} exp(h^r_t · y^c_i),    (2)
    P(w_t) = P_r(w_t) · P_c(w_t),    (3)
where r(w) is the row index of word w, c(w) is its column index, y^r_i ∈ R^m is the i-th vector of
Y^r ∈ R^{m×√|V|}, y^c_i ∈ R^m is the i-th vector of Y^c ∈ R^{m×√|V|}, and S_r and S_c denote the set of rows
and columns of the word table respectively. Note that we do not see the t-th word before predicting
it. In Figure 2, given the input column vector x^c_{t−1} of the (t−1)-th word, we first infer the row
probability P_r(w_t) of the t-th word, and then choose the index of the row with the largest probability
in P_r(w_t) to look up the next input row vector x^r_t. Similarly, we can then infer the column probability
P_c(w_t) of the t-th word.
We can see that by using Eqn.(3), we effectively reduce the computation of the probability of the next
word from a |V|-way normalization (in standard RNN models) to two √|V|-way normalizations. To
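A minimal numpy sketch (our own illustration) of one LightRNN prediction step based on Eqns. (1)-(3), with the recurrent cell simplified to a plain tanh unit for brevity:

```python
import numpy as np

def softmax(z: np.ndarray) -> np.ndarray:
    e = np.exp(z - z.max())
    return e / e.sum()

def lightrnn_step(x_col_prev, h_row_prev, Xr, W, U, b, Yr, Yc):
    """One LightRNN prediction step (Eqns. (1)-(3)).

    x_col_prev: column embedding of word t-1, shape (n,)
    h_row_prev: previous row hidden state, shape (m,)
    Xr:         row input-embedding matrix, shape (n, sqrt(|V|))
    Yr, Yc:     row/column output-embedding matrices, shape (m, sqrt(|V|))
    """
    h_col = np.tanh(W @ x_col_prev + U @ h_row_prev + b)  # Eqn (1), first half
    p_row = softmax(h_col @ Yr)          # first sqrt(|V|)-way normalization
    r = int(p_row.argmax())              # most likely row -> next row vector
    h_row = np.tanh(W @ Xr[:, r] + U @ h_col + b)         # Eqn (1), second half
    p_col = softmax(h_row @ Yc)          # second sqrt(|V|)-way normalization
    # P(word in row r, column c) = p_row[r] * p_col[c]    # Eqn (3)
    return p_row, p_col, h_row
```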
better understand the reduction of the model size, we compare the key components in a vanilla RNN
model and in our proposed LightRNN model by considering an example with embedding dimension
n = 1024, hidden unit dimension m = 1024 and vocabulary size |V | = 10M. Suppose we use 32-bit
floating point representation for each dimension. The total size of the two embedding matrices X, Y
is (m × |V| + n × |V|) × 4 = 80GB for the vanilla RNN model and that of the four embedding
matrices X^r, X^c, Y^r, Y^c in LightRNN is 2 × (m × √|V| + n × √|V|) × 4 ≈ 50MB. It is clear that
LightRNN shrinks the model size by a significant factor so that it can be easily fit into the memory of
a GPU device or a mobile device.
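This size comparison is easy to verify numerically (our own check of the arithmetic above):

```python
import math

m = n = 1024                 # hidden and embedding dimensions
V = 10_000_000               # vocabulary size
bytes_per_float = 4          # 32-bit floats

vanilla = (m * V + n * V) * bytes_per_float
light = 2 * (m * math.sqrt(V) + n * math.sqrt(V)) * bytes_per_float
print(f"vanilla embeddings:  {vanilla / 1e9:.1f} GB")   # ~81.9 GB
print(f"LightRNN embeddings: {light / 1e6:.1f} MB")     # ~51.8 MB
```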
The cell of hidden state h can be implemented by a LSTM [22] or a gated recurrent unit (GRU) [7],
and our idea works with any kind of recurrent unit. Please note that in LightRNN, the input and
output use different embedding matrices but they share the same word-allocation table.
3.2
Bootstrap for Word Allocation
The LightRNN algorithm described in the previous subsection assumes that there exists a word allocation table. It remains a problem how to appropriately generate this table, i.e., how to allocate the words into appropriate columns and rows. In this subsection, we discuss this issue. Specifically, we propose a bootstrap procedure to iteratively refine the word allocation based on the learned word embedding in the LightRNN model:
(1) For cold start, randomly allocate the words into the table.
(2) Train the input/output embedding vectors in LightRNN based on the given allocation until
convergence. Exit if a stopping criterion (e.g., training time, or perplexity for language modeling)
is met, otherwise go to the next step.
(3) Fixing the embedding vectors learned in the previous step, refine the allocation in the table, to
minimize the loss function over all the words. Go to Step (2).
As can be seen above, the refinement of the word allocation table according to the learned embedding
vectors is a key step in the bootstrap procedure. We will provide more details about it, by taking
language modeling as an example.
The target in language modeling is to minimize the negative log-likelihood of the next word in
a sequence, which is equivalent to optimizing the cross-entropy between the target probability
distribution and the prediction given by the LightRNN model. Given a context with T words, the
overall negative log-likelihood can be expressed as follows:

$$NLL = \sum_{t=1}^{T} -\log P(w_t) = \sum_{t=1}^{T} \big(-\log P_r(w_t) - \log P_c(w_t)\big). \quad (4)$$

$NLL$ can be expanded with respect to words: $NLL = \sum_{w=1}^{|V|} NLL_w$, where $NLL_w$ is the negative log-likelihood for a specific word $w$.
For ease of deduction, we rewrite $NLL_w$ as $l(w, r(w), c(w))$, where $(r(w), c(w))$ is the position of word $w$ in the word allocation table. In addition, we use $l_r(w, r(w))$ and $l_c(w, c(w))$ to represent the row component and column component of $l(w, r(w), c(w))$ (which we call the row loss and column loss of word $w$ for ease of reference). The relationship between these quantities is

$$NLL_w = \sum_{t \in S_w} -\log P(w_t) = l(w, r(w), c(w)) = \sum_{t \in S_w} -\log P_r(w_t) + \sum_{t \in S_w} -\log P_c(w_t) = l_r(w, r(w)) + l_c(w, c(w)), \quad (5)$$

where $S_w$ is the set of all the positions for the word $w$ in the corpus.

Now we consider adjusting the allocation table to minimize the loss function $NLL$. For word
w, suppose we plan to move it from the original cell (r(w), c(w)) to another cell (i, j) in the
table. Then we can calculate the row loss $l_r(w, i)$ if it is moved to row $i$ while its column and the allocation of all the other words remain unchanged. We can also calculate the column loss $l_c(w, j)$ in a similar way. Next we define the total loss of this move as $l(w, i, j)$, which is equal to $l_r(w, i) + l_c(w, j)$ according to Eqn.(5). The total cost of calculating all $l(w, i, j)$ is $O(|V|^2)$, by assuming $l(w, i, j) = l_r(w, i) + l_c(w, j)$, since we only need to calculate the loss of each word allocated in every row and column separately. In fact, all $l_r(w, i)$ and $l_c(w, j)$ have already been
calculated during the forward part of LightRNN training: to predict the next word we need to compute the scores (i.e., in Eqn.(2), $h_{t-1}^c \cdot y_i^r$ and $h_t^r \cdot y_i^c$ for all $i$) of all the words in the vocabulary for normalization, and $l_r(w, i)$ is the sum of $-\log\big(\exp(h_{t-1}^c \cdot y_i^r) \big/ \sum_k \exp(h_{t-1}^c \cdot y_k^r)\big)$ over all the appearances of word $w$
in the training data. After we calculate l(w, i, j) for all possible w, i, j, we can write the reallocation
problem as the following optimization problem:
$$\min_{a} \sum_{(w,i,j)} l(w,i,j)\, a(w,i,j) \quad \text{subject to} \quad \sum_{(i,j)} a(w,i,j) = 1 \;\; \forall w \in V,$$
$$\sum_{w} a(w,i,j) = 1 \;\; \forall i \in S_r,\, j \in S_c, \qquad a(w,i,j) \in \{0,1\} \;\; \forall w \in V,\, i \in S_r,\, j \in S_c, \quad (6)$$

where $a(w,i,j) = 1$ means allocating word $w$ to position $(i,j)$ of the table, and $S_r$ and $S_c$ denote the row set and column set of the table respectively.
By defining a weighted bipartite graph $G = (\mathcal{V}, E)$ with $\mathcal{V} = (V, S_r \times S_c)$, in which the weight of the edge in $E$ connecting a node $w \in V$ and a node $(i, j) \in S_r \times S_c$ is $l(w, i, j)$, we will see that the above optimization problem is equivalent to a standard minimum weight perfect matching problem [18] on
graph G. This problem has been well studied in the literature, and one of the best practical algorithms
for the problem is the minimum cost maximum flow (MCMF) algorithm [1], whose basic idea is
shown in Figure 3. In Figure 3(a), we assign each edge connecting a word node w and a position
node (i, j) with flow capacity 1 and cost l(w, i, j). The remaining edges starting from source (src)
or ending at destination (dst) are all with flow capacity 1 and cost 0. The thick solid lines in Figure
3(a) give an example of the optimal weighted matching solution, while Figure 3(b) illustrates how the
allocation gets updated correspondingly. Since the computational complexity of MCMF is $O(|V|^3)$, which is still costly for a large vocabulary, we alternatively leverage a linear-time (with respect to $|E|$) 1/2-approximation algorithm [20] in our experiments, whose computational complexity is $O(|V|^2)$.
When the number of tokens in the dataset is far larger than the size of the vocabulary (which is the
common case), this complexity can be ignored as compared with the overall complexity of LightRNN
training (which is around O(|V |KT ), where K is the number of epochs in the training process and
T is the total number of tokens in the training data).
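For intuition only, the exact reallocation can also be phrased as a standard assignment problem and handed to an off-the-shelf solver; the following is a minimal sketch using SciPy's Hungarian-method solver on a toy-sized table (our own framing of Eqn.(6), not the MCMF implementation, and at the O(|V|^3) cost the paper avoids via the 1/2-approximation of [20]):

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def reallocate(l_row, l_col):
    """Exact table reallocation for Eqn.(6).

    l_row[w, i]: row loss l_r(w, i); l_col[w, j]: column loss l_c(w, j).
    Since l(w, i, j) = l_r(w, i) + l_c(w, j), flattening the table cells
    turns this into a standard |V| x |V| assignment problem."""
    n_rows, n_cols = l_row.shape[1], l_col.shape[1]
    cost = (l_row[:, :, None] + l_col[:, None, :]).reshape(len(l_row), -1)
    words, cells = linear_sum_assignment(cost)  # min-weight perfect matching
    return {w: (cell // n_cols, cell % n_cols) for w, cell in zip(words, cells)}
```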
Figure 3: The MCMF algorithm for minimum weight perfect matching ((a) the flow network with an optimal matching; (b) the correspondingly updated allocation).
4 Experiments
To test LightRNN, we conducted a set of experiments on the language modeling task.
4.1 Settings
We use perplexity (PPL) as the measure to evaluate the performance of an algorithm for language modeling (the lower, the better), defined as $PPL = \exp(NLL/T)$, where $T$ is the number
of tokens in the test set. We used all the linguistic corpora from 2013 ACL Workshop Morphological Language Datasets (ACLW) [4] and the One-Billion-Word Benchmark Dataset (BillionW)
[5] in our experiments. The detailed information of these public datasets is listed in Table 1.
Table 1: Statistics of the datasets

Dataset        #Token  Vocabulary Size
ACLW-Spanish   56M     152K
ACLW-French    57M     137K
ACLW-English   20M     60K
ACLW-Czech     17M     206K
ACLW-German    51M     339K
ACLW-Russian   25M     497K
BillionW       799M    793K

For the ACLW datasets, we kept all the training/validation/test sets exactly the same as those in [4, 13] by using their processed data¹. For the BillionW dataset, since the data² are unprocessed, we processed them according to the standard procedure as listed in [5]: we discarded all words with count below 3 and padded the sentence boundary markers <S>, <\S>. Words outside the vocabulary were mapped to the <UNK> token. Meanwhile, the partition of training/validation/test sets on BillionW was the same as the public settings in [5] for fair comparisons.
We trained LSTM-based LightRNN using stochastic gradient descent with truncated backpropagation
through time [10, 25]. The initial learning rate was 1.0 and then decreased by a ratio of 2 if the
perplexity did not improve on the validation set after a certain number of mini batches. We clipped
the gradients of the parameters such that their norms were bounded by 5.0. We further performed
dropout with probability 0.5 [28]. All the training processes were conducted on one single GPU K20
with 5GB memory.
4.2 Results and Discussions
For the ACLW datasets, we mainly compared LightRNN with two state-of-the-art LSTM RNN algorithms in [13]: one utilizes hierarchical softmax for word prediction (denoted as HSM),
and the other one utilizes hierarchical softmax as well as character-level convolutional filters for
input embedding (denoted as C-HSM). We explored several choices of dimensions of shared
embedding for LightRNN: 200, 600, and 1000. Note that 200 is exactly the word embedding
size of HSM and C-HSM models used in [13]. Since our algorithm significantly reduces the
model size, it allows us to use larger dimensions of embedding vectors while still keeping our
model size very small. Therefore, we also tried 600 and 1000 in LightRNN, and the results
are shown in Table 2.

¹ https://www.dropbox.com/s/m83wwnlz3dw5zhk/large.zip?dl=0
² http://tiny.cc/1billionLM
Table 3: Runtime comparisons in order to achieve the HSMs' baseline PPL

ACLW
Method       Runtime (hours)  Reallocation/Training
C-HSM [13]   168              --
LightRNN     82               0.19%

BillionW
Method       Runtime (hours)  Reallocation/Training
HSM [6]      168              --
LightRNN     70               2.36%

Table 4: Results on BillionW dataset

Method            PPL  #param
KN [5]            68   2G
HSM [6]           85   1.6G
B-RNN [12]        68   4.1G
LightRNN          66   41M
KN + HSM [6]      56   --
KN + B-RNN [12]   47   --
KN + LightRNN     43   --
We can see that with larger embedding sizes, LightRNN achieves better accuracy in terms of perplexity. With 1000-dimensional embedding, it achieves the best result while the total model size is still quite small. Thus, we set 1000 as the shared embedding size while comparing with baselines on all the ACLW datasets in the following experiments.
Table 2: Test PPL of LightRNN on the ACLW-French dataset w.r.t. embedding sizes

Embedding size  PPL  #param
200             340  0.9M
600             208  7M
1000            176  17M

Table 5 shows the perplexity and model sizes on all the ACLW datasets. As can be seen, LightRNN significantly reduces the model size, while at the same time outperforming the baselines in terms of perplexity. Furthermore, while the model sizes of the baseline methods increase linearly with respect to the vocabulary size, the model size of LightRNN stays almost constant on the ACLW datasets.
For the BillionW dataset, we mainly compared with BlackOut for RNN [12] (B-RNN), which achieves the state-of-the-art result by interpolating with KN (Kneser-Ney) 5-gram. Since the best single model reported in the paper is a 1-layer RNN with 2048-dimensional word embedding, we also used this embedding size for LightRNN. In addition, we compared with the HSM result reported in [6], which used 1024 dimensions for word embedding, but still has 40x more parameters than our model. For further comparisons, we also ensembled LightRNN with the KN 5-gram model. We utilized the KenLM Language Model Toolkit³ to get the probability distribution from the KN model with the same vocabulary setting.
The results on BillionW are shown in Table 4. It is easy to see that LightRNN achieves the lowest perplexity whilst significantly reducing the model size. For example, it reduces the model size by a factor of 40 as compared to HSM and by a factor of 100 as compared to B-RNN. Furthermore, through ensembling with the KN 5-gram model, LightRNN achieves a perplexity of 43.

In our experiments, the overall training of LightRNN consisted of several rounds of word table refinement. In each round, training continued until the perplexity on the validation set converged. Figure 4 shows how the perplexity improves with the table refinements on one of the ACLW datasets. Based on our observations, 3-4 rounds of refinement usually give satisfactory results.

Figure 4: Perplexity curve on ACLW-French.
Table 3 shows the training time of our algorithm in order to achieve the same perplexity as some
baselines on the two datasets. As can be seen, LightRNN saves half of the runtime to achieve the
same perplexity as C-HSM and HSM. This table also shows the time cost of word table refinement in
the whole training process. Obviously, the word reallocation part accounts for only a small fraction of the total training time.
³ http://kheafield.com/code/kenlm/
Table 5: PPL results on the test sets of the various ACLW linguistic datasets. Italic results are the previous state-of-the-art. #P denotes the number of parameters.

Method       Spanish/#P  French/#P  English/#P  Czech/#P  German/#P  Russian/#P
KN [4]       219/--      243/--     291/--      862/--    463/--     390/--
HSM [13]     186/61M     202/56M    236/25M     701/83M   347/137M   353/200M
C-HSM [13]   169/48M     190/44M    216/20M     578/64M   305/104M   313/152M
LightRNN     157/18M     176/17M    191/17M     558/18M   281/18M    288/19M
Figure 5 shows a set of rows in the word allocation table on the BillionW dataset after several rounds
of bootstrap. Surprisingly, our approach could automatically discover the semantic and syntactic
relationship of words in natural languages. For example, the place names are allocated together in
row 832; the expressions about the concept of time are allocated together in row 889; and URLs
are allocated together in row 887. This automatically discovered semantic/syntactic relationship
may explain why LightRNN, with such a small number of parameters, sometimes outperforms those
baselines that assume all the words are independent of each other (i.e., embedding each word as an
independent vector).
Figure 5: Case study of word allocation table
5 Conclusion and future work
In this work, we have proposed a novel algorithm, LightRNN, for natural language processing
tasks. Through the 2-Component shared embedding for word representations, LightRNN achieves
high efficiency in terms of both model size and running time, especially for text corpora with large
vocabularies.
There are many directions to explore in the future. First, we plan to apply LightRNN on even larger
corpora, such as the ClueWeb dataset, for which conventional RNN models cannot be fit into a
modern GPU. Second, we will apply LightRNN to other NLP tasks such as machine translation and
question answering. Third, we will explore k-Component shared embedding (k > 2) and study the
role of k in the tradeoff between efficiency and effectiveness. Fourth, we are cleaning our code and will release it soon through CNTK [27].
Acknowledgments
The authors would like to thank the anonymous reviewers for their critical and constructive comments
and suggestions. This work was partially supported by the National Science Fund of China under
Grant Nos. 91420201, 61472187, 61502235, 61233011 and 61373063, the Key Project of Chinese
Ministry of Education under Grant No. 313030, the 973 Program No. 2014CB349303, and Program
for Changjiang Scholars and Innovative Research Team in University. We also would like to thank
Professor Xiaolin Hu from Department of Computer Science and Technology, Tsinghua National
Laboratory for Information Science and Technology (TNList) for giving a lot of wonderful advice.
References
[1] Ravindra K. Ahuja, Thomas L. Magnanti, and James B. Orlin. Network flows. Technical report, DTIC Document, 1988.
[2] Jeremy Appleyard, Tomas Kocisky, and Phil Blunsom. Optimizing performance of recurrent neural networks on GPUs. arXiv preprint arXiv:1604.01946, 2016.
[3] Yoshua Bengio, Jean-Sébastien Senécal, et al. Quick training of probabilistic neural nets by importance sampling. In AISTATS, 2003.
[4] Jan A. Botha and Phil Blunsom. Compositional morphology for word representations and language modelling. arXiv preprint arXiv:1405.4273, 2014.
[5] Ciprian Chelba, Tomas Mikolov, Mike Schuster, Qi Ge, Thorsten Brants, Phillipp Koehn, and Tony Robinson. One billion word benchmark for measuring progress in statistical language modeling. arXiv preprint arXiv:1312.3005, 2013.
[6] Welin Chen, David Grangier, and Michael Auli. Strategies for training large vocabulary neural language models. arXiv preprint arXiv:1512.04906, 2015.
[7] Junyoung Chung, Caglar Gulcehre, KyungHyun Cho, and Yoshua Bengio. Empirical evaluation of gated recurrent neural networks on sequence modeling. arXiv preprint arXiv:1412.3555, 2014.
[8] Felix A. Gers, Jürgen Schmidhuber, and Fred Cummins. Learning to forget: Continual prediction with LSTM. Neural Computation, 12(10):2451–2471, 2000.
[9] Joshua Goodman. Classes for fast maximum entropy training. In Acoustics, Speech, and Signal Processing (ICASSP '01), volume 1, pages 561–564. IEEE, 2001.
[10] Alex Graves. Generating sequences with recurrent neural networks. arXiv preprint arXiv:1308.0850, 2013.
[11] Sepp Hochreiter and Jürgen Schmidhuber. Long short-term memory. Neural Computation, 9(8):1735–1780, 1997.
[12] Shihao Ji, S. V. N. Vishwanathan, Nadathur Satish, Michael J. Anderson, and Pradeep Dubey. BlackOut: Speeding up recurrent neural network language models with very large vocabularies. arXiv preprint arXiv:1511.06909, 2015.
[13] Yoon Kim, Yacine Jernite, David Sontag, and Alexander M. Rush. Character-aware neural language models. arXiv preprint arXiv:1508.06615, 2015.
[14] Tomas Mikolov, Martin Karafiát, Lukáš Burget, Jan Černocký, and Sanjeev Khudanpur. Recurrent neural network based language model. In INTERSPEECH, volume 2, page 3, 2010.
[15] Tomáš Mikolov, Stefan Kombrink, Lukáš Burget, Jan Honza Černocký, and Sanjeev Khudanpur. Extensions of recurrent neural network language model. In Acoustics, Speech and Signal Processing (ICASSP), pages 5528–5531. IEEE, 2011.
[16] Andriy Mnih and Geoffrey E. Hinton. A scalable hierarchical distributed language model. In Advances in Neural Information Processing Systems, pages 1081–1088, 2009.
[17] Frederic Morin and Yoshua Bengio. Hierarchical probabilistic neural network language model. In AISTATS, volume 5, pages 246–252. Citeseer, 2005.
[18] Christos H. Papadimitriou and Kenneth Steiglitz. Combinatorial Optimization: Algorithms and Complexity. Courier Corporation, 1982.
[19] Jan Pomikálek, Miloš Jakubíček, and Pavel Rychlý. Building a 70 billion word corpus of English from ClueWeb. In LREC, pages 502–506, 2012.
[20] Robert Preis. Linear time 1/2-approximation algorithm for maximum weighted matching in general graphs. In STACS 99, pages 259–269. Springer, 1999.
[21] Haşim Sak, Andrew Senior, and Françoise Beaufays. Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128, 2014.
[22] Martin Sundermeyer, Ralf Schlüter, and Hermann Ney. LSTM neural networks for language modeling. In INTERSPEECH, pages 194–197, 2012.
[23] Ilya Sutskever, Oriol Vinyals, and Quoc V. Le. Sequence to sequence learning with neural networks. In Advances in Neural Information Processing Systems, pages 3104–3112, 2014.
[24] Duyu Tang, Bing Qin, and Ting Liu. Document modeling with gated recurrent neural network for sentiment classification. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 1422–1432, 2015.
[25] Paul J. Werbos. Backpropagation through time: what it does and how to do it. Proceedings of the IEEE, 78(10):1550–1560, 1990.
[26] Jason Weston, Sumit Chopra, and Antoine Bordes. Memory networks. arXiv preprint arXiv:1410.3916, 2014.
[27] Dong Yu, Adam Eversole, Mike Seltzer, Kaisheng Yao, Zhiheng Huang, Brian Guenter, Oleksii Kuchaiev, Yu Zhang, Frank Seide, Huaming Wang, et al. An introduction to computational networks and the computational network toolkit. Technical report, Microsoft Research, 2014. research.microsoft.com/apps/pubs.
[28] Wojciech Zaremba, Ilya Sutskever, and Oriol Vinyals. Recurrent neural network regularization. arXiv preprint arXiv:1409.2329, 2014.
6,095 | 6,513 | Contextual semibandits via supervised learning oracles
Akshay Krishnamurthy†  akshay@cs.umass.edu
Alekh Agarwal‡  alekha@microsoft.com
Miroslav Dudík‡  mdudik@microsoft.com
† College of Information and Computer Sciences, University of Massachusetts, Amherst, MA
‡ Microsoft Research, New York, NY
Abstract
We study an online decision making problem where on each round a learner chooses
a list of items based on some side information, receives a scalar feedback value for
each individual item, and a reward that is linearly related to this feedback. These
problems, known as contextual semibandits, arise in crowdsourcing, recommendation, and many other domains. This paper reduces contextual semibandits to
supervised learning, allowing us to leverage powerful supervised learning methods
in this partial-feedback setting. Our first reduction applies when the mapping from
feedback to reward is known and leads to a computationally efficient algorithm
with near-optimal regret. We show that this algorithm outperforms state-of-the-art
approaches on real-world learning-to-rank datasets, demonstrating the advantage of
oracle-based algorithms. Our second reduction applies to the previously unstudied
setting when the linear mapping from feedback to reward is unknown. Our regret
guarantees are superior to prior techniques that ignore the feedback.
1 Introduction
Decision making with partial feedback, motivated by applications including personalized
medicine [21] and content recommendation [16], is receiving increasing attention from the machine learning community. These problems are formally modeled as learning from bandit feedback,
where a learner repeatedly takes an action and observes a reward for the action, with the goal of
maximizing reward. While bandit learning captures many problems of interest, several applications
have additional structure: the action is combinatorial in nature and more detailed feedback is provided.
For example, in internet applications, we often recommend sets of items and record information about
the user's interaction with each individual item (e.g., click). This additional feedback is unhelpful
unless it relates to the overall reward (e.g., number of clicks), and, as in previous work, we assume a
linear relationship. This interaction is known as the semibandit feedback model.
Typical bandit and semibandit algorithms achieve reward that is competitive with the single best fixed
action, i.e., the best medical treatment or the most popular news article for everyone. This is often
inadequate for recommendation applications: while the most popular articles may get some clicks,
personalizing content to the users is much more effective. A better strategy is therefore to leverage
contextual information to learn a rich policy for selecting actions, and we model this as contextual
semibandits. In this setting, the learner repeatedly observes a context (user features), chooses a
composite action (list of articles), which is an ordered tuple of simple actions, and receives reward for
the composite action (number of clicks), but also feedback about each simple action (click). The goal
of the learner is to find a policy for mapping contexts to composite actions that achieves high reward.
We typically consider policies in a large but constrained class, for example, linear learners or tree
ensembles. Such a class enables us to learn an expressive policy, but introduces a computational
challenge of finding a good policy without direct enumeration. We build on the supervised learning
literature, which has developed fast algorithms for such policy classes, including logistic regression
and SVMs for linear classifiers and boosting for tree ensembles. We access the policy class exclusively
through a supervised learning algorithm, viewed as an oracle.
30th Conference on Neural Information Processing Systems (NIPS 2016), Barcelona, Spain.
Algorithm                 Regret                            Oracle Calls               Weights w*
VCEE (Thm. 1)             √(KLT log N)                      T^{3/2} √(K/(L log N))     known
ε-Greedy (Thm. 3)         (LT)^{2/3} (K log N)^{1/3}        1                          known
Kale et al. [12]          √(KLT log N)                      not oracle-based           known
EELS (Thm. 2)             (LT)^{2/3} (K log N)^{1/3}        1                          unknown
Agarwal et al. [1]        L √(K^L T log N)                  √(K^L T / log N)           unknown
Swaminathan et al. [22]   L^{4/3} T^{2/3} (K log N)^{1/3}   1                          unknown

Table 1: Comparison of contextual semibandit algorithms for arbitrary policy classes, assuming all rankings are valid composite actions. The reward is semibandit feedback weighted according to $w^*$. For known weights, we consider $w^* = 1$; for unknown weights, we assume $\|w^*\|_2 \le O(\sqrt{L})$.
In this paper, we develop and evaluate oracle-based algorithms for the contextual semibandits problem.
We make the following contributions:
1. In the more common setting where the linear function relating the semibandit feedback to the
reward is known, we develop a new algorithm, called VCEE, that extends the oracle-based
contextual bandit algorithm of Agarwal et al. [1]. We show that VCEE enjoys a regret bound between $\tilde{O}(\sqrt{KLT \log N})$ and $\tilde{O}(L\sqrt{KT \log N})$, depending on the combinatorial structure of the problem, when there are $T$ rounds of interaction, $K$ simple actions, $N$ policies, and composite actions have length $L$.¹ VCEE can handle structured action spaces and makes $\tilde{O}(T^{3/2})$ calls to the supervised learning oracle.
2. We empirically evaluate this algorithm on two large-scale learning-to-rank datasets and compare
with other contextual semibandit approaches. These experiments comprehensively demonstrate
that effective exploration over a rich policy class can lead to significantly better performance than
existing approaches. To our knowledge, this is the first thorough experimental evaluation of not
only oracle-based semibandit methods, but of oracle-based contextual bandits as well.
3. When the linear function relating the feedback to the reward is unknown, we develop a new
algorithm called EELS. Our algorithm first learns the linear function by uniform exploration
and then, adaptively, switches to act according to an empirically optimal policy. We prove an
$\tilde{O}((LT)^{2/3}(K \log N)^{1/3})$ regret bound by analyzing when to switch. We are not aware of other
computationally efficient procedures with a matching or better regret bound for this setting.
See Table 1 for a comparison of our results with existing applicable bounds.
Related work. There is a growing body of work on combinatorial bandit optimization [2, 4] with
considerable attention on semibandit feedback [6, 10, 12, 13, 19]. The majority of this research
focuses on the non-contextual setting with a known relationship between semibandit feedback and reward, and a typical algorithm here achieves an $\tilde{O}(\sqrt{KLT})$ regret against the best fixed composite
action. To our knowledge, only the work of Kale et al. [12] and Qin et al. [19] considers the
contextual setting, again with a known relationship. The former generalizes the Exp4 algorithm [3] to semibandits, and achieves $\tilde{O}(\sqrt{KLT})$ regret,² but requires explicit enumeration of the policies.
The latter generalizes the LinUCB algorithm of Chu et al. [7] to semibandits, assuming that the
simple action feedback is linearly related to the context. This differs from our setting: we make no
assumptions about the simple action feedback. In our experiments, we compare VCEE against this
LinUCB-style algorithm and demonstrate substantial improvements.
We are not aware of attempts to learn a relationship between the overall reward and the feedback on
simple actions as we do with EELS. While EELS uses least squares, as in LinUCB-style approaches,
it does so without assumptions on the semibandit feedback. Crucially, the covariates for its least
squares problem are observed after predicting a composite action and not before, unlike in LinUCB.
Supervised learning oracles have been used as a computational primitive in many settings including
active learning [11], contextual bandits [1, 9, 20, 23], and structured prediction [8].
¹ Throughout the paper, the $\tilde{O}(\cdot)$ notation suppresses factors polylogarithmic in $K, L, T$ and $\log N$. We analyze finite policy classes, but our work extends to infinite classes by standard discretization arguments.
² Kale et al. [12] consider the favorable setting where our bounds match, when uniform exploration is valid.

2 Preliminaries
Let $X$ be a space of contexts and $A$ a set of $K$ simple actions. Let $\Pi \subseteq (X \to A^L)$ be a finite set of policies, $|\Pi| = N$, mapping contexts to composite actions. Composite actions, also called rankings, are tuples of $L$ distinct simple actions. In general, there are $K!/(K-L)!$ possible rankings, but they might not be valid in all contexts. The set of valid rankings for a context $x$ is defined implicitly through the policy class as $\{\pi(x)\}_{\pi \in \Pi}$.

Let $\Delta(\Pi)$ be the set of distributions over policies, and $\tilde{\Delta}(\Pi)$ be the set of non-negative weight vectors over policies, summing to at most 1, which we call subdistributions. Let $\mathbf{1}(\cdot)$ be the 0/1 indicator equal to 1 if its argument is true and 0 otherwise.
In stochastic contextual semibandits, there is an unknown distribution $D$ over triples $(x, y, \eta)$, where $x$ is a context, $y \in [0,1]^K$ is the vector of reward features, with entries indexed by simple actions as $y(a)$, and $\eta \in [-1, 1]$ is the reward noise, $E[\eta \mid x, y] = 0$. Given $y \in \mathbb{R}^K$ and $A = (a_1, \ldots, a_L) \in A^L$, we write $y(A) \in \mathbb{R}^L$ for the vector with entries $y(a_\ell)$. The learner plays a $T$-round game. In each round, nature draws $(x_t, y_t, \eta_t) \sim D$ and reveals the context $x_t$. The learner selects a valid ranking $A_t = (a_{t,1}, a_{t,2}, \ldots, a_{t,L})$ and gets reward $r_t(A_t) = \sum_{\ell=1}^{L} w^*_\ell\, y_t(a_{t,\ell}) + \eta_t$, where $w^* \in \mathbb{R}^L$ is a possibly unknown but fixed weight vector. The learner is shown the reward $r_t(A_t)$ and the vector of reward features for the chosen simple actions $y_t(A_t)$, jointly referred to as semibandit feedback.
? cumulative reward competitive with all ? 2 ?. For a policy ?, let R(?) :=
E(x,y,?)?D r ?(x) denote its expected reward, and let ? ? := argmax?2? R(?) be the maximizer
of expected reward. We measure performance of an algorithm via cumulative empirical regret,
Regret :=
T
X
rt (? ? (xt ))
(1)
rt (At ).
t=1
The performance of a policy ? is measured by its expected regret, Reg(?) := R(? ? ) R(?).
Example 1. In personalized search, a learning system repeatedly responds to queries with rankings
of search items. This is a contextual semibandit problem where the query and user features form the
context, the simple actions are search items, and the composite actions are their lists. The semibandit
feedback is whether the user clicked on each item, while the reward may be the click-based discounted
cumulative gain (DCG), which is a weighted sum of clicks, with position-dependent weights. We
want to map contexts to rankings to maximize DCG and achieve a low regret.
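For concreteness, the click-based DCG reward in this example is just a position-weighted sum of the per-slot click feedback; a minimal sketch (our own code, using the usual DCG log-discount weights, which the text does not pin down):

```python
import math

def dcg_reward(clicks):
    """clicks[l] in {0, 1}: semibandit feedback for the item in slot l.
    Reward is sum_l w*_l * y(a_l) with DCG weights w*_l = 1/log2(l + 2)."""
    return sum(c / math.log2(l + 2) for l, c in enumerate(clicks))

print(dcg_reward([1, 0, 1]))  # clicks at positions 1 and 3 -> 1.5
```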
We assume that our algorithms have access to a supervised learning oracle, also called an argmax oracle, denoted AMO, that can find a policy with the maximum empirical reward on any appropriate dataset. Specifically, given a dataset $D = \{x_i, y_i, v_i\}_{i=1}^{n}$ of contexts $x_i$, reward feature vectors $y_i \in \mathbb{R}^K$ with rewards for all simple actions, and weight vectors $v_i \in \mathbb{R}^L$, the oracle computes

$$\mathrm{AMO}(D) := \mathrm{argmax}_{\pi\in\Pi} \sum_{i=1}^{n} \langle v_i, y_i(\pi(x_i))\rangle = \mathrm{argmax}_{\pi\in\Pi} \sum_{i=1}^{n}\sum_{\ell=1}^{L} v_{i,\ell}\, y_i(\pi(x_i)_\ell), \quad (2)$$

where $\pi(x)_\ell$ is the $\ell$-th simple action that policy $\pi$ chooses on context $x$. The oracle is supervised as it assumes known features $y_i$ for all simple actions, whereas we only observe them for chosen actions.
This oracle is the structured generalization of the one considered in contextual bandits [1, 9] and can
be implemented by any structured prediction approach such as CRFs [14] or SEARN [8].
Our algorithms choose composite actions by sampling from a distribution, which allows us to use importance weighting to construct unbiased estimates for the reward features $y$. If on round $t$, a composite action $A_t$ is chosen with probability $Q_t(A_t)$, we construct the importance weighted feature vector $\hat{y}_t$ with components $\hat{y}_t(a) := y_t(a)\, \mathbf{1}(a \in A_t)/Q_t(a \in A_t)$, which are unbiased estimators of $y_t(a)$. For a policy $\pi$, we then define empirical estimates of its reward and regret, resp., as
$$\hat{R}_t(\pi, w) := \frac{1}{t}\sum_{i=1}^{t} \langle w, \hat{y}_i(\pi(x_i))\rangle \qquad \text{and} \qquad \widehat{\mathrm{Reg}}_t(\pi, w) := \max_{\pi'} \hat{R}_t(\pi', w) - \hat{R}_t(\pi, w).$$

By construction, $\hat{R}_t(\pi, w^*)$ is an unbiased estimate of the expected reward $R(\pi)$, but $\widehat{\mathrm{Reg}}_t(\pi, w^*)$ is not an unbiased estimate of the expected regret $\mathrm{Reg}(\pi)$. We use $\hat{E}_{x\sim H}[\cdot]$ to denote empirical expectation over contexts appearing in the history of interaction $H$.
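A minimal sketch of these estimators (our own code; `q_marginal[a]` stands for the marginal probability $Q_t(a \in A_t)$, and a policy is any callable returning a tuple of L action indices):

```python
import numpy as np

def iw_features(y_obs, chosen, q_marginal, K):
    """Importance-weighted features: y_hat(a) = y(a) * 1(a in A_t) / Q_t(a in A_t).

    y_obs: dict mapping each chosen simple action to its observed feedback."""
    y_hat = np.zeros(K)
    for a in chosen:  # feedback is observed only for the chosen actions
        y_hat[a] = y_obs[a] / q_marginal[a]
    return y_hat

def empirical_reward(pi, history, w):
    """R_hat_t(pi, w): average of <w, y_hat_i(pi(x_i))> over the history."""
    return np.mean([w @ y_hat[list(pi(x))] for x, y_hat in history])
```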
Algorithm 1 VCEE (Variance-Constrained Explore-Exploit) Algorithm
Require: Allowed failure probability $\delta \in (0, 1)$.
1: $Q_0 = 0$, the all-zeros vector. $H_0 = \emptyset$. Define: $\mu_t = \min\big\{1/(2K),\; \sqrt{\ln(16 t^2 N/\delta)/(K t\, p_{\min})}\big\}$.
2: for round $t = 1, \ldots, T$ do
3:   Let $\pi_{t-1} = \mathrm{argmax}_{\pi\in\Pi}\, \hat{R}_{t-1}(\pi, w^*)$ and $\tilde{Q}_{t-1} = Q_{t-1} + \big(1 - \sum_\pi Q_{t-1}(\pi)\big)\, \mathbf{1}_{\pi_{t-1}}$.
4:   Observe $x_t \in X$, play $A_t \sim \tilde{Q}_{t-1}^{\mu_{t-1}}(\cdot \mid x_t)$ (see Eq. (3)), and observe $y_t(A_t)$ and $r_t(A_t)$.
5:   Define $q_t(a) = \tilde{Q}_{t-1}^{\mu_{t-1}}(a \in A \mid x_t)$ for each $a$.
6:   Obtain $Q_t$ by solving OP with $H_t = H_{t-1} \cup \{(x_t, y_t(A_t), q_t(A_t))\}$ and $\mu_t$.
7: end for
Semi-bandit Optimization Problem (OP)
With history $H$ and $\mu \ge 0$, define $b_\pi := \frac{\|w^*\|_1\, \widehat{\mathrm{Reg}}_t(\pi)}{\|w^*\|_2^2\, \psi \mu\, p_{\min}}$ and $\psi := 100$. Find $Q \in \tilde{\Delta}(\Pi)$ such that:

$$\sum_{\pi \in \Pi} Q(\pi)\, b_\pi \le \frac{2KL}{p_{\min}} \quad (4)$$

$$\forall \pi \in \Pi: \quad \hat{E}_{x\sim H}\left[\sum_{\ell=1}^{L} \frac{1}{Q^\mu(\pi(x)_\ell \in A \mid x)}\right] \le \frac{2KL}{p_{\min}} + b_\pi \quad (5)$$
Finally, we introduce projections and smoothing of distributions. For any $\mu \in [0, 1/K]$ and any subdistribution $P \in \tilde{\Delta}(\Pi)$, the smoothed and projected conditional subdistribution $P^\mu(A \mid x)$ is

$$P^\mu(A \mid x) := (1 - K\mu) \sum_{\pi\in\Pi} P(\pi)\, \mathbf{1}(\pi(x) = A) + K\mu\, U_x(A), \quad (3)$$

where $U_x$ is a uniform distribution over a certain subset of valid rankings for context $x$, designed to ensure that the probability of choosing each valid simple action is large. By mixing $U_x$ into our action selection, we limit the variance of reward feature estimates $\hat{y}$. The lower bound on the simple action probabilities under $U_x$ appears in our analysis as $p_{\min}$, which is the largest number satisfying

$$U_x(a \in A) \ge p_{\min}/K$$

for all $x$ and all simple actions $a$ valid for $x$. Note that $p_{\min} = L$ when there are no restrictions on the action space, as we can take $U_x$ to be the uniform distribution over all rankings and verify that $U_x(a \in A) = L/K$. In the worst case, $p_{\min} = 1$, since we can always find one valid ranking for each valid simple action and let $U_x$ be the uniform distribution over this set. Such a ranking can be found efficiently by a call to AMO for each simple action $a$, with the dataset of a single point $(x, \mathbf{1}_a \in \mathbb{R}^K, \mathbf{1} \in \mathbb{R}^L)$, where $\mathbf{1}_a(a') = \mathbf{1}(a = a')$.
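A sketch of sampling from the smoothed distribution in Eq. (3) (our own code; for simplicity it takes U_x to be uniform over all rankings, i.e., the unrestricted case, and assumes P has already been renormalized into a full distribution):

```python
import numpy as np

def sample_smoothed(P, policies, x, mu, K, L, rng=None):
    """Draw a ranking A ~ P^mu(. | x): with probability 1 - K*mu follow a
    policy sampled from P; with probability K*mu draw from U_x (uniform)."""
    rng = rng or np.random.default_rng()
    if rng.random() < 1 - K * mu:
        pi = policies[rng.choice(len(policies), p=P)]
        return tuple(pi(x))
    return tuple(rng.choice(K, size=L, replace=False))  # uniform ranking
```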
3 Semibandits with known weights

We begin with the setting where the weights $w^*$ are known, and present an efficient oracle-based
algorithm (VCEE, see Algorithm 1) that generalizes the algorithm of Agarwal et al. [1].
The algorithm, before each round $t$, constructs a subdistribution $Q_{t-1} \in \tilde{\Delta}(\Pi)$, which is used to form the distribution $\tilde{Q}_{t-1}$ by placing the missing mass on the maximizer of empirical reward. The composite action for the context $x_t$ is chosen according to the smoothed distribution $\tilde{Q}_{t-1}^{\mu_{t-1}}$ (see Eq. (3)). The subdistribution $Q_{t-1}$ is any solution to the feasibility problem (OP), which balances exploration and exploitation via the constraints in Eqs. (4) and (5). Eq. (4) ensures that the distribution has low empirical regret. Simultaneously, Eq. (5) ensures that the variance of the reward estimates $\hat{y}$ remains sufficiently small for each policy $\pi$, which helps control the deviation between empirical and expected regret, and implies that $Q_{t-1}$ has low expected regret. For each $\pi$, the variance constraint is based on the empirical regret of $\pi$, guaranteeing sufficient exploration amongst all good policies.
OP can be solved efficiently using AMO and a coordinate descent procedure obtained by modifying
the algorithm of Agarwal et al. [1]. While the full algorithm and analysis are deferred to Appendix E,
several key differences between VCEE and the algorithm of Agarwal et al. [1] are worth highlighting.
One crucial modification is that the variance constraint in Eq. (5) involves the marginal probabilities of the simple actions rather than the composite actions, as would be the most obvious adaptation to our setting. This change, based on using the reward estimates $\hat{y}_t$ for simple actions, leads to substantially lower variance of reward estimates for all policies and, consequently, an improved regret bound. Another important modification is the new mixing distribution $U_x$ and the quantity $p_{\min}$. For structured composite action spaces, uniform exploration over the valid composite actions may not provide sufficient coverage of each simple action and may lead to a dependence on the composite action space size, which is exponentially worse than when $U_x$ is used.
The regret guarantee for Algorithm 1 is the following:
Theorem 1. For any $\delta \in (0, 1)$, with probability at least $1 - \delta$, VCEE achieves regret $\tilde{O}\big(\frac{\|w^*\|_2^2}{\|w^*\|_1}\, L \sqrt{KT \log(N/\delta)/p_{\min}}\big)$. Moreover, VCEE can be efficiently implemented with $\tilde{O}\big(T^{3/2} \sqrt{K/(p_{\min} \log(N/\delta))}\big)$ calls to a supervised learning oracle AMO.
In Table 1, we compare this result to other applicable regret bounds in the most common setting, where $w^* = 1$ and all rankings are valid ($p_{\min} = L$). VCEE enjoys a $\tilde{O}(\sqrt{KLT \log N})$ regret bound,
which is the best bound amongst oracle-based approaches, representing an exponentially better
L-dependence over the purely bandit feedback variant [1] and a polynomially better T -dependence
over an ε-greedy scheme (see Theorem 3 in Appendix A). This improvement over ε-greedy is also
verified by our experiments. Additionally, our bound matches that of Kale et al. [12], who consider
the harder adversarial setting but give an algorithm that requires an exponentially worse running time,
$\Omega(NT)$, and cannot be efficiently implemented with an oracle.
Other results address the non-contextual setting, where the optimal bounds for both stochastic [13] and adversarial [2] semibandits are $\Theta(\sqrt{KLT})$. Thus, our bound may be optimal when $p_{\min} = \Theta(L)$. However, these results apply even without requiring all rankings to be valid, so they improve on our bound by a $\sqrt{L}$ factor when $p_{\min} = 1$. This $\sqrt{L}$ discrepancy may not be fundamental, but it seems unavoidable with some degree of uniform exploration, as in all existing contextual bandit algorithms. A promising avenue to resolve this gap is to extend the work of Neu [18], which gives high-probability bounds in the noncontextual setting without uniform exploration.
To summarize, our regret bound is similar to existing results on combinatorial (semi)bandits but
represents a significant improvement over existing computationally efficient approaches.
4 Semibandits with unknown weights
We now consider a generalization of the contextual semibandit problem with a new challenge: the weight vector $w^*$ is unknown. This setting is substantially more difficult than the previous one, as it is no longer clear how to use the semibandit feedback to optimize for the overall reward. Our result shows that the semibandit feedback can still be used effectively, even when the transformation is unknown. Throughout, we assume that the true weight vector $w^*$ has bounded norm, i.e., $\|w^*\|_2 \le B$.
One restriction required by our analysis is the ability to play any ranking. Thus, all rankings must
be valid in all contexts, which is a natural restriction in domains such as information retrieval and
recommendation. The uniform distribution over all rankings is denoted U .
We propose an algorithm that explores first and then, adaptively, switches to exploitation. In the
exploration phase, we play rankings uniformly at random, with the goal of accumulating enough
information to learn the weight vector $w^*$ for effective policy optimization. Exploration lasts for a variable length of time governed by two parameters $n_0$ and $\tau$. The $n_0$ parameter controls the minimum number of rounds of the exploration phase and is $O(T^{2/3})$, similar to ε-greedy style schemes [15]. The adaptivity is implemented by the $\tau$ parameter, which imposes a lower bound
on the eigenvalues of the 2nd-moment matrix of reward features observed during exploration. As a
result, we only transition to the exploitation phase after this matrix has suitably large eigenvalues.
Since we make no assumptions about the reward features, there is no bound on how many rounds
this may take. This is a departure from previous explore-first schemes, and captures the difficulty of
learning $w^*$ when we observe the regression features only after taking an action.
After the exploration phase of t rounds, we perform least-squares regression using the observed
reward features and the rewards to learn an estimate $\hat{w}$ of $w^*$.
Algorithm 2 EELS (Explore-Exploit Least Squares)
Require: Allowed failure probability $\delta \in (0, 1)$. Assume $\|w^*\|_2 \le B$.
1: Set $n_0 \leftarrow T^{2/3} (K \ln(N/\delta)/L)^{1/3} \max\{1, (B\sqrt{L})^{2/3}\}$.
2: for $t = 1, \ldots, n_0$ do
3:   Observe $x_t$, play $A_t \sim U$ ($U$ is uniform over all rankings), observe $y_t(A_t)$ and $r_t(A_t)$.
4: end for
5: Let $\hat{V} = \frac{1}{2 n_0 K^2} \sum_{t=1}^{n_0} \sum_{a,b \in A} \big(y_t(a) - y_t(b)\big)^2 \, \frac{\mathbf{1}(a, b \in A_t)}{U(a, b \in A_t)}$.
6: $\bar{V} \leftarrow 2\hat{V} + 3\ln(2/\delta)/(2 n_0)$.
7: Set $\tau \leftarrow \max\big\{6 L^2 \ln(4LT/\delta),\; (T\bar{V}/B)^{2/3} (L \ln(2/\delta))^{1/3}\big\}$.
8: Set $\Sigma \leftarrow \sum_{t=1}^{n_0} y_t(A_t) y_t(A_t)^T$.
9: while $\lambda_{\min}(\Sigma) \le \tau$ do
10:   $t \leftarrow t + 1$. Observe $x_t$, play $A_t \sim U$, observe $y_t(A_t)$ and $r_t(A_t)$.
11:   Set $\Sigma \leftarrow \Sigma + y_t(A_t) y_t(A_t)^T$.
12: end while
13: Estimate weights $\hat{w} \leftarrow \Sigma^{-1} \big(\sum_{i=1}^{t} y_i(A_i)\, r_i(A_i)\big)$ (Least Squares).
14: Optimize policy $\hat{\pi} \leftarrow \mathrm{argmax}_{\pi \in \Pi}\, \hat{R}_t(\pi, \hat{w})$ using importance weighted features.
15: For every remaining round: observe $x_t$, play $A_t = \hat{\pi}(x_t)$.
We use $\hat{w}$ and importance-weighted reward features from the exploration phase to find a policy $\hat{\pi}$ with maximum empirical reward, $\hat{R}_t(\pi, \hat{w})$. The remaining rounds comprise the exploitation phase, where we play according to $\hat{\pi}$.

The remaining question is how to set $\tau$, which governs the length of the exploration phase. The ideal setting uses the unknown parameter $V := E_{(x,y)\sim D}\, \mathrm{Var}_{a\sim \mathrm{Unif}(A)}[y(a)]$ of the distribution $D$, where $\mathrm{Unif}(A)$ is the uniform distribution over all simple actions. We form an unbiased estimator $\hat{V}$ of $V$ and derive an upper bound $\bar{V}$. While the optimal $\tau$ depends on $V$, the upper bound $\bar{V}$ suffices.
For this algorithm, we prove the following regret bound.
Theorem 2. For any $\delta \in (0, 1)$ and $T \ge K \ln(N/\delta)/\min\{L, (BL)^2\}$, with probability at least $1 - \delta$, EELS has regret $\tilde{O}\big(T^{2/3} (K \log(N/\delta))^{1/3} \max\{B^{1/3} L^{1/2}, B L^{1/6}\}\big)$. EELS can be implemented efficiently with one call to the optimization oracle.
The theorem shows that we can achieve sublinear regret without dependence on the composite action
space size even when the weights are unknown. The only applicable alternatives from the literature are displayed in Table 1, specialized to $B = \Theta(\sqrt{L})$. First, oracle-based contextual bandits [1]
achieve a better T -dependence, but both the regret and the number of oracle calls grow exponentially
with L. Second, the deviation bound of Swaminathan et al. [22], which exploits the reward structure
but not the semibandit feedback, leads to an algorithm with regret that is polynomially worse in its
dependence on L and B (see Appendix B). This observation is consistent with non-contextual results,
which show that the value of semibandit information is only in L factors [2].
Of course EELS has a sub-optimal dependence on $T$, although this is the best we are aware of for a computationally efficient algorithm in this setting. It is an interesting open question to achieve $\mathrm{poly}(K, L)\sqrt{T \log N}$ regret with unknown weights.
5 Proof sketches
We next sketch the arguments for our theorems. Full proofs are deferred to the appendices.
Proof of Theorem 1: The result generalizes Agarwal et al. [1], and the proof structure is similar. For the regret bound, we use Eq. (5) to control the deviation of the empirical reward estimates which make up the empirical regret $\widehat{\mathrm{Reg}}_t$. A careful inductive argument leads to the following bounds:

$$\mathrm{Reg}(\pi) \le 2\widehat{\mathrm{Reg}}_t(\pi) + c_0 \frac{\|w^*\|_2^2}{\|w^*\|_1} KL\mu_t \qquad \text{and} \qquad \widehat{\mathrm{Reg}}_t(\pi) \le 2\mathrm{Reg}(\pi) + c_0 \frac{\|w^*\|_2^2}{\|w^*\|_1} KL\mu_t.$$

Here $c_0$ is a universal constant and $\mu_t$ is defined in the pseudocode. Eq. (4) guarantees low empirical regret when playing according to $\tilde{Q}_t^{\mu_t}$, and the above inequalities also ensure small population regret. The cumulative regret is bounded by $\frac{\|w^*\|_2^2}{\|w^*\|_1} KL \sum_{t=1}^{T} \mu_t$, which grows at the rate given in Theorem 1.
The number of oracle calls is bounded by the analysis of the number of iterations of coordinate
descent used to solve OP, via a potential argument similar to Agarwal et al. [1].
Proof of Theorem 2: We analyze the exploration and exploitation phases individually, and then optimize $n_0$ and $\tau$ to balance these terms. For the exploration phase, the expected per-round regret can be bounded by either $\|w^*\|_2\sqrt{KV}$ or $\|w^*\|_2\sqrt{L}$, but the number of rounds depends on the minimum eigenvalue $\lambda_{\min}(\Sigma)$, with $\Sigma$ defined in Steps 8 and 11. However, the expected per-round 2nd-moment matrix, $E_{(x,y)\sim D, A\sim U}[y(A)y(A)^T]$, has all eigenvalues at least $V$. Thus, after $t$ rounds, we expect $\lambda_{\min}(\Sigma) \ge tV$, so exploration lasts about $\tau/V$ rounds, yielding roughly

$$\text{Exploration Regret} \lesssim \frac{\tau}{V}\, \|w^*\|_2 \min\{\sqrt{KV}, \sqrt{L}\}.$$

Now our choice of $\tau$ produces a benign dependence on $V$ and yields a $T^{2/3}$ bound.
For the exploitation phase, we bound the error between the empirical reward estimates $\hat{R}_t(\pi, \hat{w})$ and the true reward $R(\pi)$. Since we know $\lambda_{\min}(\Sigma) \ge \tau$ in this phase, we obtain

$$\text{Exploitation Regret} \lesssim T \|w^*\|_2 \sqrt{\frac{K \log N}{n_0}} + T\sqrt{\frac{L}{\tau}}\, \min\{\sqrt{KV}, \sqrt{L}\}.$$

The first term captures the error from using the importance-weighted $\hat{y}$ vector, while the second uses a bound on the error $\|\hat{w} - w^*\|_2$ from the analysis of linear regression (assuming $\lambda_{\min}(\Sigma) \ge \tau$).
This high-level argument ignores several important details. First, we must show that using $\bar{V}$ instead of the optimal choice $V$ in the setting of $\tau$ does not affect the regret. Secondly, since the termination condition for the exploration phase depends on the random variable $\Sigma$, we must derive a high-probability bound on the number of exploration rounds to control the regret. Obtaining this bound requires a careful application of the matrix Bernstein inequality to certify that $\Sigma$ has large eigenvalues.
6 Experimental Results
Our experiments compare VCEE with existing alternatives. As VCEE generalizes the algorithm
of Agarwal et al. [1], our experiments also provide insights into oracle-based contextual bandit
approaches, and this is the first detailed empirical study of such algorithms. The weight vector $w^*$ in
our datasets was known, so we do not evaluate EELS. This section contains a high-level description
of our experimental setup, with details on our implementation, baseline algorithms, and policy classes
deferred to Appendix C. Software is available at http://github.com/akshaykr/oracle_cb.
Data: We used two large-scale learning-to-rank datasets: MSLR [17] and all folds of the Yahoo!
Learning-to-Rank dataset [5]. Both datasets have over 30k unique queries each with a varying number
of documents that are annotated with a relevance in {0, . . . , 4}. Each query-document pair has a
feature vector (d = 136 for MSLR and d = 415 for Yahoo!) that we use to define our policy class.
For MSLR, we choose K = 10 documents per query and set L = 3, while for Yahoo!, we set K = 6
and L = 2. The goal is to maximize the sum of relevances of shown documents ($w^* = 1$), and the
individual relevances are the semibandit feedback. All algorithms make a single pass over the queries.
Algorithms: We compare VCEE, implemented with an epoch schedule for solving OP after $2^{i/2}$ rounds (justified by Agarwal et al. [1]), with two baselines. First is the ε-GREEDY approach [15], with a constant but tuned ε. This algorithm explores uniformly with probability ε and follows the empirically best policy otherwise. The empirically best policy is updated with the same $2^{i/2}$ schedule.
We also compare against a semibandit version of LinUCB [19]. This algorithm models the semibandit feedback as linearly related to the query-document features and learns this relationship, while selecting composite actions using an upper-confidence bound strategy. Specifically, the algorithm maintains a weight vector $\theta_t \in \mathbb{R}^d$ formed by solving a ridge regression problem with the semibandit feedback $y_t(a_{t,\ell})$ as regression targets. At round $t$, the algorithm uses document features $\{x_a\}_{a\in A}$ and chooses the $L$ documents with highest $x_a^T \theta_t + \alpha\sqrt{x_a^T \Sigma_t^{-1} x_a}$ value. Here, $\Sigma_t$ is the feature 2nd-moment matrix and $\alpha$ is a tuning parameter. For computational reasons, we only update $\Sigma_t$ and $\theta_t$ every 100 rounds.
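A sketch of this baseline's per-round document scoring (our own code; the square-root confidence width follows the standard LinUCB form, which the garbled source line appears to use):

```python
import numpy as np

def linucb_rank(X, theta, Sigma_inv, alpha, L):
    """Score each document x_a by x_a^T theta + alpha * sqrt(x_a^T Sigma^{-1} x_a)
    and return the indices of the L highest-scoring documents.
    X: (K, d) document features for the current query."""
    bonus = np.sqrt(np.einsum('kd,de,ke->k', X, Sigma_inv, X))
    scores = X @ theta + alpha * bonus
    return np.argsort(-scores)[:L].tolist()
```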
Oracle implementation: LinUCB only works with a linear policy class. VCEE and ε-GREEDY work with arbitrary classes. Here, we consider three: linear functions and depth-2 and depth-5
Figure 1: Average reward as a function of number of interactions T for VCEE, ε-GREEDY, and LinUCB on MSLR (left) and Yahoo! (right) learning-to-rank datasets. (Curves shown: ε-Lin, VC-Lin, ε-GB2, VC-GB2, ε-GB5, VC-GB5, LinUCB.)
gradient boosted regression trees (abbreviated Lin, GB2 and GB5). Both GB classes use 50 trees.
Precise details of how we instantiate the supervised learning oracle can be found in Appendix C.
Parameter tuning: Each algorithm has a parameter governing the explore-exploit tradeoff. For VCEE, we set $\mu_t = c\sqrt{1/(KLT)}$ and tune $c$, in ε-GREEDY we tune ε, and in LinUCB we tune α. We ran each algorithm for 10 repetitions, for each of ten logarithmically spaced parameter values.
Results: In Figure 1, we plot the average reward (cumulative reward up to round t divided by t)
on both datasets. For each t, we use the parameter that achieves the best average reward across the
10 repetitions at that t. Thus for each t, we are showing the performance of each algorithm tuned
to maximize reward over t rounds. We found VCEE was fairly stable to parameter tuning, so for
VC-GB5 we just use one parameter value (c = 0.008) for all t on both datasets. We show confidence
bands at twice the standard error for just LinUCB and VC-GB5 to simplify the plot.
Qualitatively, both datasets reveal similar phenomena. First, when using the same policy class, VCEE consistently outperforms ε-GREEDY. This agrees with our theory, as VCEE achieves $\sqrt{T}$-type regret, while a tuned ε-GREEDY achieves at best a $T^{2/3}$ rate.
Secondly, if we use a rich policy class, VCEE can significantly improve on LinUCB, the empirical state-of-the-art and one of the few practical alternatives to ε-GREEDY. Of course, since ε-GREEDY does not outperform LinUCB, the tailored exploration of VCEE is critical. Thus, the combination of these two properties is key to improved performance on these datasets. VCEE is the only contextual semibandit algorithm we are aware of that performs adaptive exploration and is agnostic to the policy representation. Note that LinUCB is quite effective and outperforms VCEE with a linear class. One possible explanation for this behavior is that LinUCB, by directly modeling the reward, searches the policy space more effectively than VCEE, which uses an approximate oracle implementation.
7 Discussion
This paper develops oracle-based algorithms for contextual semibandits with both known and unknown weights. In both cases, our algorithms achieve the best known regret bounds for computationally efficient procedures. Our empirical evaluation of VCEE clearly demonstrates the advantage of sophisticated oracle-based approaches over both parametric approaches and naive exploration. To our knowledge, this is the first detailed empirical evaluation of oracle-based contextual bandit or semibandit learning. We close with some promising directions for future work:
1. With known weights, can we obtain O(√(KLT log N)) regret even with structured action spaces? This may require a new contextual bandit algorithm that does not use uniform smoothing.
2. With unknown weights, can we achieve a √T dependence while exploiting semibandit feedback?
Acknowledgements
This work was carried out while AK was at Microsoft Research.
References
[1] A. Agarwal, D. Hsu, S. Kale, J. Langford, L. Li, and R. E. Schapire. Taming the monster: A fast and
simple algorithm for contextual bandits. In ICML, 2014.
[2] J.-Y. Audibert, S. Bubeck, and G. Lugosi. Regret in online combinatorial optimization. Math of OR, 2014.
[3] P. Auer, N. Cesa-Bianchi, Y. Freund, and R. E. Schapire. The nonstochastic multiarmed bandit problem.
SIAM Journal on Computing, 2002.
[4] N. Cesa-Bianchi and G. Lugosi. Combinatorial bandits. JCSS, 2012.
[5] O. Chapelle and Y. Chang. Yahoo! learning to rank challenge overview. In Yahoo! Learning to Rank
Challenge, 2011.
[6] W. Chen, Y. Wang, and Y. Yuan. Combinatorial multi-armed bandit: General framework and applications.
In ICML, 2013.
[7] W. Chu, L. Li, L. Reyzin, and R. E. Schapire. Contextual bandits with linear payoff functions. In AISTATS,
2011.
[8] H. Daumé III, J. Langford, and D. Marcu. Search-based structured prediction. MLJ, 2009.
[9] M. Dudík, D. Hsu, S. Kale, N. Karampatziakis, J. Langford, L. Reyzin, and T. Zhang. Efficient optimal
learning for contextual bandits. In UAI, 2011.
[10] A. György, T. Linder, G. Lugosi, and G. Ottucsák. The on-line shortest path problem under partial
monitoring. JMLR, 2007.
[11] D. J. Hsu. Algorithms for active learning. PhD thesis, University of California, San Diego, 2010.
[12] S. Kale, L. Reyzin, and R. E. Schapire. Non-stochastic bandit slate problems. In NIPS, 2010.
[13] B. Kveton, Z. Wen, A. Ashkan, and C. Szepesvári. Tight regret bounds for stochastic combinatorial
semi-bandits. In AISTATS, 2015.
[14] J. Lafferty, A. McCallum, and F. Pereira. Conditional random fields: Probabilistic models for segmenting
and labeling sequence data. In ICML, 2001.
[15] J. Langford and T. Zhang. The epoch-greedy algorithm for multi-armed bandits with side information. In
NIPS, 2008.
[16] L. Li, W. Chu, J. Langford, and R. E. Schapire. A contextual-bandit approach to personalized news article
recommendation. In WWW, 2010.
[17] MSLR: Microsoft learning to rank dataset. http://research.microsoft.com/en-us/projects/mslr/.
[18] G. Neu. Explore no more: Improved high-probability regret bounds for non-stochastic bandits. In NIPS,
2015.
[19] L. Qin, S. Chen, and X. Zhu. Contextual combinatorial bandit and its application on diversified online
recommendation. In ICDM, 2014.
[20] A. Rakhlin and K. Sridharan. Bistro: An efficient relaxation-based method for contextual bandits. In
ICML, 2016.
[21] J. M. Robins. The analysis of randomized and nonrandomized AIDS treatment trials using a new approach
to causal inference in longitudinal studies. In Health Service Research Methodology: A Focus on AIDS,
1989.
[22] A. Swaminathan, A. Krishnamurthy, A. Agarwal, M. Dudík, J. Langford, D. Jose, and I. Zitouni. Off-policy
evaluation for slate recommendation. arXiv:1605.04812v2, 2016.
[23] V. Syrgkanis, A. Krishnamurthy, and R. E. Schapire. Efficient algorithms for adversarial contextual
learning. In ICML, 2016.
6,096 | 6,514 | Stochastic Gradient Richardson-Romberg
Markov Chain Monte Carlo
Alain Durmus¹, Umut Şimşekli¹, Éric Moulines², Roland Badeau¹, Gaël Richard¹
1: LTCI, CNRS, Télécom ParisTech, Université Paris-Saclay, 75013, Paris, France
2: Centre de Mathématiques Appliquées, UMR 7641, École Polytechnique, France
Abstract
Stochastic Gradient Markov Chain Monte Carlo (SG-MCMC) algorithms have become increasingly popular for Bayesian inference in large-scale applications. Even
though these methods have proved useful in several scenarios, their performance is
often limited by their bias. In this study, we propose a novel sampling algorithm
that aims to reduce the bias of SG-MCMC while keeping the variance at a reasonable level. Our approach is based on a numerical sequence acceleration method,
namely the Richardson-Romberg extrapolation, which simply boils down to running almost the same SG-MCMC algorithm twice in parallel with different step
sizes. We illustrate our framework on the popular Stochastic Gradient Langevin
Dynamics (SGLD) algorithm and propose a novel SG-MCMC algorithm referred to
as Stochastic Gradient Richardson-Romberg Langevin Dynamics (SGRRLD). We
provide formal theoretical analysis and show that SGRRLD is asymptotically consistent, satisfies a central limit theorem, and its non-asymptotic bias and the mean
squared-error can be bounded. Our results show that SGRRLD attains higher rates
of convergence than SGLD in both finite-time and asymptotically, and it achieves
the theoretical accuracy of the methods that are based on higher-order integrators.
We support our findings using both synthetic and real data experiments.
1
Introduction
Markov Chain Monte Carlo (MCMC) techniques are one of the most popular families of algorithms in
Bayesian machine learning. Recently, novel MCMC schemes that are based on stochastic optimization have been proposed for scaling up Bayesian inference to large-scale applications. These so-called
Stochastic Gradient MCMC (SG-MCMC) methods provide a fruitful framework for Bayesian inference, well adapted to massively parallel and distributed architecture. In this domain, a first and
important attempt was made by Welling and Teh [1], where the authors combined ideas from the Unadjusted Langevin Algorithm (ULA) [2] and Stochastic Gradient Descent (SGD) [3]. They proposed
a scalable MCMC framework referred to as Stochastic Gradient Langevin Dynamics (SGLD). Unlike
conventional batch MCMC methods, SGLD uses subsamples of the data per iteration similar to SGD.
Several extensions of SGLD have been proposed [4–12]. Recently, in [10] it has been shown that under certain assumptions and with a sufficiently large number of iterations, the bias and the mean-squared-error (MSE) of a general class of SG-MCMC methods can be bounded as O(γ) and O(γ²), respectively, where γ is the step size of the Euler-Maruyama integrator. The authors have also shown
that these bounds can be improved by making use of higher-order integrators.
In this paper, we propose a novel SG-MCMC algorithm, called Stochastic Gradient Richardson-Romberg Langevin Dynamics (SGRRLD), that aims to reduce the bias of SGLD by applying a
numerical sequence acceleration method, namely the Richardson-Romberg (RR) extrapolation, which
requires running two chains with different step sizes in parallel. While reducing the bias, SGRRLD
also keeps the variance of the estimator at a reasonable level by using correlated Brownian motions.
30th Conference on Neural Information Processing Systems (NIPS 2016), Barcelona, Spain.
We show that the asymptotic bias and variance of SGRRLD can be bounded as O(γ²) and O(γ⁴), respectively. We also show that after K iterations, our algorithm achieves a rate of convergence for the MSE of order O(K^{−4/5}), whereas this rate for SGLD and its extensions with first-order integrators is of order O(K^{−2/3}).
Our results show that by only using a first-order numerical integrator, the proposed approach can
achieve the theoretical accuracy of methods that are based on higher-order integrators, such as the
ones given in [10]. This accuracy can be improved even more by applying the RR extrapolation
multiple times in a recursive manner [13]. On the other hand, since the two chains required by the
RR extrapolation can be generated independently, the SGRRLD algorithm is well adapted to parallel
and distributed architectures. It is also worth noting that our technique is quite generic and can be applied to virtually all current SG-MCMC algorithms besides SGLD, provided that they satisfy
rather technical weak error and ergodicity conditions.
In order to assess the performance of the proposed method, we conduct several experiments on both
synthetic and real datasets. We first apply our method on a rather simple Gaussian model whose
posterior distribution is analytically available and compare the performance of SGLD and SGRRLD.
In this setting, we also illustrate the generality of our technique by applying the RR extrapolation
on Stochastic Gradient Hamiltonian Monte Carlo (SGHMC) [6]. Then, we apply our method on a
large-scale matrix factorization problem for a movie recommendation task. Numerical experiments
support our theoretical results: our approach achieves improved accuracy over SGLD and SGHMC.
2 Preliminaries
2.1 Stochastic Gradient Langevin Dynamics
In MCMC, one aims at generating samples from a target probability measure π that is known up to a multiplicative constant. Assume that π has a density with respect to the Lebesgue measure, still denoted by π and given by π : θ ↦ e^{−U(θ)} / ∫_{R^d} e^{−U(θ̄)} dθ̄, where U : R^d → R is called the potential energy function. In practice, directly generating samples from π turns out to be intractable except
for very few special cases; therefore, one often needs to resort to approximate methods. A popular way to approximately generate samples from π is based on discretizations of a stochastic differential equation (SDE) that has π as an invariant distribution [14]. A common choice is the over-damped Langevin equation associated with π, that is, the SDE given by

dθ_t = −∇U(θ_t) dt + √2 dB_t,   (1)
where (B_t)_{t≥0} is the standard d-dimensional Brownian motion. Under mild assumptions on U (cf. [2]), (θ_t)_{t≥0} is a well-defined Markov process which is geometrically ergodic with respect to π. Therefore, if continuous sample paths from (θ_t)_{t≥0} could be generated, they could be used as approximate samples from π. However, this is not possible, and therefore in practice we need to use a discretization of (1). The most common discretization is the Euler-Maruyama scheme, which boils down to applying the following update equation iteratively: θ_{k+1} = θ_k − γ_{k+1} ∇U(θ_k) + √(2γ_{k+1}) Z_{k+1}, for k ≥ 0 with initial state θ_0. Here, (γ_k)_{k≥1} is a sequence of non-increasing step sizes and (Z_k)_{k≥1} is a sequence of independent and identically distributed (i.i.d.) d-dimensional standard normal random variables. This scheme is called the Unadjusted Langevin Algorithm (ULA) [2]. When the sequence of step sizes (γ_k)_{k≥0} goes to 0 as k goes to infinity, it has been shown in [15] and [16] that the empirical distribution of (θ_k)_{k≥0} weakly converges to π under certain assumptions. A central limit theorem for additive functionals has also been obtained in [17] and [16].
In Bayesian machine learning, π is often chosen as the Bayesian posterior, which imposes the following form on the potential energy: U(θ) = −(∑_{n=1}^N log p(x_n | θ) + log p(θ)) for all θ ∈ R^d, where x ≜ {x_n}_{n=1}^N is a set of observed i.i.d. data points belonging to R^m, for m ≥ 1, p(x_n | θ) : R^d → R_+ is the likelihood function, and p(θ) : R^d → R_+ is the prior distribution. In large-scale settings, N becomes very large and therefore computing ∇U can be computationally very demanding, limiting the applicability of ULA. Inspired by stochastic optimization techniques, in [1] the authors have proposed replacing the exact gradient ∇U with an unbiased estimator and presented the SGLD algorithm that iteratively applies the following update equation:

θ_{k+1} = θ_k − γ_{k+1} ∇Ũ_{k+1}(θ_k) + √(2γ_{k+1}) Z_{k+1},   (2)
where (∇Ũ_k)_{k≥1} is a sequence of i.i.d. unbiased estimators of ∇U. In the following, the common distribution of (∇Ũ_k)_{k≥1} will be denoted by L. A typical choice for the sequence of estimators (∇Ũ_k)_{k≥1} of ∇U is to randomly draw an i.i.d. sequence of data subsamples (R_k)_{k≥1} with R_k ⊆ [N] = {1, . . . , N} having a fixed number of elements |R_k| = B for all k ≥ 1. Then, set for all θ ∈ R^d, k ≥ 1,

∇Ũ_k(θ) = −[∇ log p(θ) + (N/B) ∑_{i∈R_k} ∇ log p(x_i | θ)].   (3)
Convergence analysis of SGLD has been studied in [18, 19], and it has been shown in [20] that for constant step sizes γ_k = γ > 0 for all k ≥ 1, the bias and the MSE of SGLD are of order O(γ + 1/(γK)) and O(γ² + 1/(γK)), respectively. Recently, it has been shown that these bounds are also valid in a more general family of SG-MCMC methods [10].
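For concreteness, here is a minimal sketch of the SGLD iteration (2) with the mini-batch estimator (3), using a constant step size for simplicity (function and argument names are ours; `grad_log_prior` and `grad_log_lik` are model-specific callbacks):

```python
import numpy as np

def sgld(theta0, grad_log_prior, grad_log_lik, data, gamma, K, B, rng):
    """Run K SGLD iterations (Eq. 2) with mini-batches of size B (Eq. 3).

    data: NumPy array of N observations; rng: np.random.Generator."""
    N = len(data)
    theta = np.array(theta0, dtype=float)
    samples = []
    for _ in range(K):
        idx = rng.choice(N, size=B, replace=False)          # subsample R_k
        grad_U = -(grad_log_prior(theta)
                   + (N / B) * sum(grad_log_lik(x, theta) for x in data[idx]))
        theta = theta - gamma * grad_U \
                + np.sqrt(2.0 * gamma) * rng.standard_normal(theta.shape)
        samples.append(theta.copy())
    return np.array(samples)
```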
2.2 Richardson-Romberg Extrapolation for SDEs
Richardson-Romberg extrapolation is a well-known method in numerical analysis which aims to improve the rate of convergence of a sequence. Talay and Tubaro [21] showed that the rate of convergence of Monte Carlo estimates on certain SDEs can be radically improved by using an RR extrapolation that can be described as follows. Let us consider the SDE in (1) and its Euler discretization with exact gradients and fixed step size, i.e., γ_k = γ > 0 for all k ≥ 1. Under mild assumptions on U (cf. [22]), the homogeneous Markov chain (θ_k)_{k≥0} is ergodic with a unique invariant distribution π_γ, which is different from the target distribution π. However, [21] showed that for f sufficiently smooth with polynomial growth, there exists a constant C, which only depends on π and f, such that π_γ(f) = π(f) + Cγ + O(γ²), where π(f) = ∫_{R^d} f(x) π(dx). By exploiting this result, RR extrapolation suggests considering two different discretizations of the same SDE with two different step sizes γ and γ/2. Then, instead of π_γ(f), if we consider 2π_{γ/2}(f) − π_γ(f) as the estimator, we obtain π(f) − (2π_{γ/2}(f) − π_γ(f)) = O(γ²). In the case where the sequence (γ_k)_{k≥0} goes to 0 as k → +∞, it has been observed in [23] that the estimator defined by RR extrapolation satisfies a CLT. The applications of RR extrapolation to SG-MCMC have not yet been explored.
3 Stochastic Gradient Richardson-Romberg Langevin Dynamics
In this study, we explore the use of RR extrapolation in SG-MCMC algorithms for improving their
rates of convergence. In particular, we focus on the applications of RR extrapolation on the SGLD
estimator and present a novel SG-MCMC algorithm referred to as Stochastic Gradient Richardson-Romberg Langevin Dynamics (SGRRLD).
The proposed algorithm applies RR extrapolation on SGLD by considering two SGLD chains applied to the SDE (1), with two different sequences of step sizes satisfying the following relation. For the first chain, we consider a sequence of non-increasing step sizes (γ_k)_{k≥1}, and for the second chain we use the sequence of step sizes (δ_k)_{k≥1} defined by δ_{2k−1} = δ_{2k} = γ_k/2 for k ≥ 1. These two chains are started at the same point θ_0 ∈ R^d and are run according to (2), but the chain with the smaller step size is run twice as long as the other one. In other words, the two discretizations are run until the same time horizon ∑_{k=1}^K γ_k, where K is the number of iterations. Finally, we extrapolate the two SGLD estimators in order to construct the new one. Each iteration of SGRRLD consists of one step of the first SGLD chain with (γ_k)_{k≥1} and two steps of the second SGLD chain with (δ_k)_{k≥1}.
More formally, the proposed algorithm is defined as follows: consider a starting point θ₀^{(γ)} = θ₀^{(γ/2)} = θ₀ and, for k ≥ 0,

Chain 1:  θ^{(γ)}_{k+1} = θ^{(γ)}_k − γ_{k+1} ∇Ũ^{(γ)}_{k+1}(θ^{(γ)}_k) + √(2γ_{k+1}) Z^{(γ)}_{k+1},   (4)

Chain 2:  θ^{(γ/2)}_{2k+1} = θ^{(γ/2)}_{2k} − (γ_{k+1}/2) ∇Ũ^{(γ/2)}_{2k+1}(θ^{(γ/2)}_{2k}) + √(γ_{k+1}) Z^{(γ/2)}_{2k+1},
          θ^{(γ/2)}_{2k+2} = θ^{(γ/2)}_{2k+1} − (γ_{k+1}/2) ∇Ũ^{(γ/2)}_{2k+2}(θ^{(γ/2)}_{2k+1}) + √(γ_{k+1}) Z^{(γ/2)}_{2k+2},   (5)
where (Z^{(γ/2)}_k)_{k≥1} and (Z^{(γ)}_k)_{k≥1} are two sequences of d-dimensional i.i.d. standard Gaussian random variables, and (∇Ũ^{(γ/2)}_k)_{k≥1}, (∇Ũ^{(γ)}_k)_{k≥1} are two sequences of i.i.d. unbiased estimators of ∇U with the same common distribution L, meaning that the mini-batch size has to be the same.
For a test function f : R^d → R, we then define the estimator of π(f) based on RR extrapolation as follows: for all K ∈ N*,

π̂^R_K(f) = (∑_{k=2}^{K+1} γ_k)^{−1} ∑_{k=1}^{K} γ_{k+1} [ f(θ^{(γ/2)}_{2k−1}) + f(θ^{(γ/2)}_{2k}) − f(θ^{(γ)}_k) ].   (6)
We provide a pseudo-code of SGRRLD in the supplementary document.
Under mild assumptions on ∇U and the law L (see the conditions in the Supplement), by [19, Theorem 7] we can show that π̂^R_K(f) is a consistent estimator of π(f): when lim_{k→+∞} γ_k = 0 and lim_{K→+∞} ∑_{k=1}^K γ_{k+1} = +∞, then lim_{K→+∞} π̂^R_K(f) = π(f) almost surely. However, it is not immediately clear whether applying an RR extrapolation would provide any advantage over SGLD in terms of the rate of convergence. Even if RR extrapolation were to reduce the bias of the SGLD estimator, this improvement could be offset by an increase of variance. In the context of a general class of SDEs, it has been shown in [13] that the variance of an estimator based on RR extrapolation can be controlled by using correlated Brownian increments, and the best choice in this sense is in fact taking the two sequences (Z^{(γ/2)}_k)_{k≥1} and (Z^{(γ)}_k)_{k≥1} perfectly correlated, i.e., for all k ≥ 1,

Z^{(γ)}_k = (Z^{(γ/2)}_{2k−1} + Z^{(γ/2)}_{2k}) / √2.   (7)
This choice has also been justified in the context of the sampling of the stationary distribution of a
diffusion in [23] through a central limit theorem.
Inspired by [23], in order to be able to control the variance of the SGRRLD estimator, we consider correlated Brownian increments. In particular, we assume that the Brownian increments in (4) and (5) satisfy the following relationship: there exist a matrix ρ ∈ R^{d×d} and a sequence (W_k)_{k≥1} of d-dimensional i.i.d. standard Gaussian random variables, independent of (Z^{(γ/2)}_k)_{k≥1}, such that I_d − ρ^⊤ρ is a positive semidefinite matrix and, for all k ≥ 0,

Z^{(γ)}_{k+1} = ρ^⊤ (Z^{(γ/2)}_{2k+1} + Z^{(γ/2)}_{2(k+1)}) / √2 + (I_d − ρ^⊤ρ)^{1/2} W_{k+1},   (8)
where I_d denotes the identity matrix. In Section 4, we will show that the properly scaled SGRRLD estimator converges to a Gaussian random variable whose variance is minimal when ρ = I_d, and therefore Z^{(γ)}_{k+1} should be chosen as in (7). Accordingly, (8) justifies the choice of using the same Brownian motion in the two discretizations, extending the results of [23] to SG-MCMC. On the other hand, regarding the sequences of estimators of ∇U, we assume that they can also be correlated, but we do not assume an explicit form for their relation. However, it is important to note that if the two sequences (∇Ũ^{(γ/2)}_k)_{k≥1} and (∇Ũ^{(γ)}_k)_{k≥1} do not have the same common distribution, then the SGRRLD estimator can have a bias of the same order as that of vanilla SGLD (with the same sequence of step sizes). In the particular case of (3), in order for SGRRLD to gain efficiency
compared to SGLD, the mini-batch size has to be the same for the two chains.
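A minimal sketch of one SGRRLD iteration under these choices — constant step size, shared Brownian increments as in (7) (i.e., ρ = I_d), and a common stochastic-gradient routine that draws a fresh mini-batch on every call; all names are ours:

```python
import numpy as np

def sgrrld_step(th_g, th_h, gamma, grad_U_est, rng):
    """One SGRRLD iteration: two half-steps of the gamma/2 chain (Eq. 5)
    and one step of the gamma chain (Eq. 4), with coupled noise (Eq. 7)."""
    z1 = rng.standard_normal(th_h.shape)
    z2 = rng.standard_normal(th_h.shape)
    # gamma/2 chain: step size gamma/2, hence noise scale sqrt(2 * gamma/2) = sqrt(gamma)
    th_h = th_h - 0.5 * gamma * grad_U_est(th_h) + np.sqrt(gamma) * z1
    th_h = th_h - 0.5 * gamma * grad_U_est(th_h) + np.sqrt(gamma) * z2
    # gamma chain: its Brownian increment is the coupled combination (z1 + z2)/sqrt(2)
    z = (z1 + z2) / np.sqrt(2.0)
    th_g = th_g - gamma * grad_U_est(th_g) + np.sqrt(2.0 * gamma) * z
    return th_g, th_h
```

The RR estimate π̂^R_K(f) of (6) is then the γ-weighted running average of f(θ^{(γ/2)}_{2k−1}) + f(θ^{(γ/2)}_{2k}) − f(θ^{(γ)}_k) over the iterations.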
4 Convergence Analysis
We analyze asymptotic and non-asymptotic properties of SGRRLD. In order to save space and avoid
obscuring the results, we present the technical conditions under which the theorems hold, and the full
proofs in the supplementary document.
We first present a central limit theorem for the estimator π̂^R_K(f) of π(f) (see (6)) for a smooth function f. Let us define Γ_K^{(n)} = ∑_{k=1}^K γ_{k+1}^n and Γ_K = Γ_K^{(1)}, for all n ∈ N.
Theorem 1. Let f : R^d → R be a smooth function and (γ_k)_{k≥1} be a nonincreasing sequence satisfying lim_{k→+∞} γ_k = 0 and lim_{K→+∞} Γ_K = +∞. Let (θ^{(γ)}_k, θ^{(γ/2)}_k)_{k≥0} be defined by (4)-(5), started at θ₀ ∈ R^d, and assume that the relation (8) holds for ρ ∈ R^{d×d}. Under appropriate conditions on U, f and L, the following statements hold:

a) If lim_{K→+∞} Γ_K^{(3)}/√Γ_K = 0, then √Γ_K (π̂^R_K(f) − π(f)) converges in law, as K goes to infinity, to a zero-mean Gaussian random variable with variance σ²_R, which is minimized when ρ = I_d.

b) If lim_{K→+∞} Γ_K^{(3)}/√Γ_K = β ∈ (0, +∞), then √Γ_K (π̂^R_K(f) − π(f)) converges in law, as K goes to infinity, to a Gaussian random variable with variance σ²_R and mean β μ_R.

c) If lim_{K→+∞} Γ_K^{(3)}/√Γ_K = +∞, then (Γ_K/Γ_K^{(3)}) (π̂^R_K(f) − π(f)) converges in probability, as K goes to infinity, to μ_R.

The expressions of σ²_R and μ_R are given in the supplementary document.
Proof (Sketch). The proof follows the same strategy as the one in [23, Theorem 4.3] for ULA. We assume that the Poisson equation associated with f has a solution g ∈ C⁹(R^d). Then, the proof consists in making a 7th-order Taylor expansion for g(θ^{(γ)}_{k+1}), g(θ^{(γ/2)}_{2k}) and g(θ^{(γ/2)}_{2k+1}) at θ^{(γ)}_k, θ^{(γ/2)}_{2k−1} and θ^{(γ/2)}_{2k}, respectively. Then π̂^R_K(f) − π(f) is decomposed as a sum of three terms A_{1,K} + A_{2,K} + A_{3,K}. A_{1,K} is the fluctuation term, and Γ_K^{1/2} A_{1,K} converges to a zero-mean Gaussian random variable with variance σ²_R. A_{2,K} is the bias term, and Γ_K A_{2,K}/Γ_K^{(3)} converges in probability to μ_R as K → +∞ if lim_{K→+∞} Γ_K^{(3)} = +∞. Finally, the last term Γ_K^{1/2} A_{3,K} goes to 0 as K → +∞. The detailed proof is given in the supplementary document.
These results state that the Gaussian noise dominates the stochastic gradient noise. Moreover, we also observe that the correlation between the two sequences of Gaussian random variables (Z^{(γ)}_k)_{k≥1} and (Z^{(γ/2)}_k)_{k≥1} has an important impact on the asymptotic convergence of π̂^R(f), whereas the correlation of the two sequences of stochastic gradients does not.
A typical choice of decreasing sequence (γ_k)_{k≥1} is of the form γ_k = γ_1 k^{−α} for α ∈ (0, 1]. With such a choice, Theorem 1 states that π̂^R(f) converges to π(f) at a rate of convergence of order O(K^{−((1−α)/2)∧(2α)}), where a ∧ b = min(a, b). Therefore, the optimal choice of the exponent α for obtaining the fastest convergence turns out to be α = 1/5, which implies a rate of convergence of order O(K^{−2/5}). Note that this rate is higher than that of SGLD, whose optimal rate is of order O(K^{−1/3}). Besides, α = 1/5 corresponds to the second point of Theorem 1, in which there is an equal contribution of the bias and the fluctuation at the asymptotic level. Further discussions and detailed calculations can be found in the supplementary document.
We now derive non-asymptotic bounds for the bias and the MSE of the estimator π̂^R(f).

Theorem 2. Let f : R^d → R be a smooth function and (γ_k)_{k≥1} be a nonincreasing sequence such that there exists K₁ ≥ 1 with γ_{K₁} ≤ 1, and lim_{K→+∞} Γ_K = +∞. Let (θ^{(γ)}_k, θ^{(γ/2)}_k)_{k≥0} be defined by (4)-(5), started at θ₀ ∈ R^d. Under appropriate conditions on U, f and L, there exists C ≥ 0 such that for all K ∈ N, K ≥ 1:

BIAS:  |E[π̂^R_K(f)] − π(f)| ≤ (C/Γ_K) (Γ_K^{(3)} + 1),
MSE:   E[(π̂^R_K(f) − π(f))²] ≤ C {(Γ_K^{(3)}/Γ_K)² + 1/Γ_K}.
Proof (Sketch). The proof follows the same strategy as that of Theorem 1, but instead of establishing the exact convergence of the fluctuation and the bias terms, we just give an upper bound for
these two terms. The detailed proof is given in the supplementary document.
It is important to observe that the constant C which appears in Theorem 2 depends on moments of the estimator of the gradient. For fixed step size γ_k = γ for all k ≥ 1, Theorem 2 shows that the bias is of order O(γ² + 1/(Kγ)). Therefore, if the number of iterations K is fixed, then the choice of γ which minimizes this bound is γ ∝ K^{−1/3}, obtained by differentiating x ↦ x² + (xK)^{−1}. Choosing this value for γ leads to the optimal rate for the bias of order O(K^{−2/3}). Note that this bound is better than that of SGLD, for which the optimal bound on the bias at fixed K is of order O(K^{−1/2}). The same approach can be applied to the MSE, which is of order O(γ⁴ + 1/(Kγ)). Then, the optimal choice of the step size is γ = O(K^{−1/5}), leading to a bound of order O(K^{−4/5}). Similarly to the previous case, this bound is smaller than the bound obtained with SGLD, which is O(K^{−2/3}).
If we choose γ_k = γ_1 k^{−α} for α ∈ (0, 1], Theorem 2 shows that the bias and the MSE go to 0 as K goes to infinity. More precisely, for α ∈ (0, 1), the bound on the bias is O(K^{−(2α)∧(1−α)}), and is therefore minimal for α = 1/3. As for the MSE, the bound provided by Theorem 2 is O(K^{−(4α)∧(1−α)}), which is consistent with Theorem 1, leading to an optimal bound of order O(K^{−4/5}) for α = 1/5.
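These exponents are easy to sanity-check numerically; the following throwaway script (ours, not from the paper) minimizes the two bounds over a grid and recovers exponents close to −1/3 and −1/5:

```python
import numpy as np

K = 10**6
gammas = np.logspace(-6, -0.5, 4000)
bias_bound = gammas**2 + 1.0 / (K * gammas)   # O(gamma^2 + 1/(K*gamma))
mse_bound = gammas**4 + 1.0 / (K * gammas)    # O(gamma^4 + 1/(K*gamma))
print(np.log(gammas[np.argmin(bias_bound)]) / np.log(K))  # approx -1/3
print(np.log(gammas[np.argmin(mse_bound)]) / np.log(K))   # approx -1/5
```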
Figure 1: The performance of SGRRLD on synthetic data. (a) The true posterior p(θ|x) and the estimated posteriors. (b) The MSE for different problem sizes (dimension d).
5 Experiments
5.1 Linear Gaussian Model
We conduct our first set of experiments on synthetic data where we consider a simple Gaussian model
whose posterior distribution is analytically available. The model is given as follows:
θ ∼ N(0, σ_θ² I_d),   x_n | θ ∼ N(a_n^⊤ θ, σ_x²), for all n.   (9)

Here, we assume that the explanatory variables {a_n}_{n=1}^N ∈ R^{N×d}, σ_θ², and σ_x² are known, and we aim to draw samples from the posterior distribution p(θ|x). In all the experiments, we first randomly generate a_n ∼ N(0, 0.5 I_d) and we generate the true θ and the response variables x by using the
generative model given in (9). All our experiments are conducted on a standard laptop computer
with 2.5GHz Quad-core Intel Core i7 CPU, and in all settings, the two chains of SGRRLD are run in
parallel.
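Since the model (9) is conjugate, the ground-truth posterior is available in closed form; the following helper (ours — the paper does not spell out these standard Bayesian linear-regression identities) computes the "true" quantities the experiments compare against:

```python
import numpy as np

def exact_posterior(A, x, sigma_theta2, sigma_x2):
    """Closed-form posterior N(mu, Sigma) of theta | x for model (9).

    A: (N, d) matrix whose rows are the a_n^T; x: (N,) response vector."""
    d = A.shape[1]
    Sigma = np.linalg.inv(np.eye(d) / sigma_theta2 + A.T @ A / sigma_x2)
    mu = Sigma @ (A.T @ x) / sigma_x2
    return mu, Sigma
```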
In our first experiment, we set d = 1, σ_θ² = 10, σ_x² = 1, N = 1000, and the size of each minibatch B = N/10. We fix the step size to γ = 10^{−3}. In order to ensure that both algorithms are run for a fixed computation time, we run SGLD for K = 21000 iterations, where we discard the first 1000 samples as burn-in, and we run SGRRLD for K = 10500 iterations accordingly, where we discard the samples generated in the first 500 iterations as burn-in. Figure 1(a) shows the typical results of this experiment. In particular, in the left figure, we illustrate the true posterior distribution and the Gaussian density N(μ̂_post, σ̂²_post) for both algorithms, where μ̂_post and σ̂²_post denote the empirical posterior mean and variance, respectively. In the right figure, we monitor the bias of the estimated variance as a function of computation time. The results show that SGLD overestimates the posterior variance, whereas SGRRLD is able to reduce this error significantly. We also observe that the results support our theory: the bias of the estimated variance is ≈ 10^{−2} for SGLD, whereas this bias is reduced to ≈ 10^{−4} with SGRRLD.
In our second experiment, we fix γ and K and monitor the MSE of the posterior covariance as a function of the dimension d of the problem. In order to measure the MSE, we compute the squared Frobenius norm of the difference between the true posterior covariance and the estimated covariance. Similarly to the previous experiment, we average 100 runs that are initialized randomly. The results are shown in Figure 1(b). They clearly show that SGRRLD provides a significant performance improvement over SGLD, where the MSE of SGRRLD is on the order of the square of the MSE of SGLD for all values of d.

Figure 2: Bias and MSE of SGLD and SGRRLD for different step sizes.
In our next experiment, we use the same setting as in the first experiment and we monitor the bias and the MSE of the estimated variance as a function of the step size γ. For evaluation, we average 100 runs that are initialized randomly. As depicted in Figure 2, the results show that SGRRLD yields significantly better results than SGLD in terms of both the bias and the MSE. Note that for very small γ, the bias and MSE increase. This is because the term 1/(Kγ) in the bounds of Theorem 2 dominates both the bias and the MSE, as expected since K is fixed. Therefore, we observe a drop in the bias and the MSE as we increase γ up to ≈ 8 × 10^{−5}, and then they gradually increase along with γ.
We conduct the next experiment in order to check the rate of convergence that we have derived in Theorem 2 for fixed step size γ_k = γ for all k ≥ 1. We observe that the optimal choice of the step size is of the form γ = γ_b* K^{−1/3} for the bias and γ = γ_M* K^{−0.2} for the MSE. To confirm our findings, we first need to determine the constants γ_b* and γ_M*, which can be done by using the results from the previous experiment. Accordingly, we observe that γ_b* ≈ 8.5 × 10^{−5} × (20000)^{1/3} ≈ 2 × 10^{−3} and γ_M* ≈ 1.7 × 10^{−4} × (20000)^{0.2} ≈ 10^{−3}. Then, to confirm the right dependency of γ on K, we fix K = 10^6 and monitor the bias with the sequence of step sizes γ = γ_b* K^{−α} and the MSE with γ = γ_M* K^{−α} for several values of α, as given in Figure 3. It can be observed that the optimal convergence rate is still obtained for α = 1/3 for the bias and α = 0.2 for the MSE, which confirms the results of Theorem 2. For a decreasing sequence of step sizes γ_k = γ_1* k^{−α} for α ∈ (0, 1], we conduct a similar experiment to confirm that the best convergence rate is achieved by choosing α = 1/3 in the case of the bias and α = 0.2 in the case of the MSE. The resulting figures can be found in the supplementary document.

Figure 3: Bias and MSE of SGRRLD with different rates for the step size (γ).
In our last synthetic data experiment, instead of SGLD, we consider another SG-MCMC algorithm, namely the Stochastic Gradient Hamiltonian Monte Carlo (SGHMC) [6]. We apply the proposed extrapolation scheme described in Section 3 to SGHMC and call the resulting algorithm Stochastic Gradient Richardson-Romberg Hamiltonian Monte Carlo (SGRRHMC). In this experiment, we use the same setting as in Figure 2, and we monitor the bias and the MSE of the estimated variance as a function of γ. We compare SGRRHMC against SGHMC with the Euler discretization [6] and SGHMC with a higher-order splitting integrator (SGHMC-s) [10] (we describe SGHMC, SGHMC-s, and SGRRHMC in more detail in the supplementary document). We average 100 runs that are initialized randomly. As given in Figure 4, the results are similar to the ones obtained in Figure 2: for large enough γ, SGRRHMC yields significantly better results than SGHMC. For small γ, the term 1/(Kγ) in the bound derived in Theorem 2 dominates the MSE, and therefore SGRRHMC requires a larger K for improving over SGHMC. For large enough values of γ, we observe that SGRRHMC obtains an MSE similar to that of SGHMC-s with small γ, which confirms our claim that the proposed approach can achieve the accuracy of the methods that are based on higher-order integrators.

Figure 4: The performance of RR extrapolation on SGHMC.
5.2 Large-Scale Matrix Factorization
In our second set of experiments, we evaluate our approach on a large-scale matrix factorization
problem for a link prediction application, where we consider the following probabilistic model:

W_{ip} ∼ N(0, σ_w²),   H_{pj} ∼ N(0, σ_h²),   X_{ij} | W, H ∼ N(∑_p W_{ip} H_{pj}, σ_x²),

where X ∈ R^{I×J} is the observed data matrix with missing entries, and W ∈ R^{I×P} and H ∈ R^{J×P} are the latent factors, whose entries are i.i.d. distributed.

Figure 5: The performance of SGRRLD on large-scale matrix factorization problems: (a) MovieLens-1Million, (b) MovieLens-10Million, (c) MovieLens-20Million.

The aim in this application is to predict the missing values of
X by using a low-rank approximation. This model is similar to the Bayesian probabilistic matrix
factorization model [24] and it is often used in large-scale matrix factorization problems [25], in
which SG-MCMC has been shown to outperform optimization methods such as SGD [26].
In this experiment, we compare SGRRLD against SGLD on three large movie ratings datasets, namely
the MovieLens 1Million (ML-1M), MovieLens 10Million (ML-10M), and MovieLens 20Million
(ML-20M) (grouplens.org). The ML-1M dataset contains about 1 million ratings applied to
I = 3883 movies by J = 6040 users, resulting in a sparse observed matrix X with 4.3% non-zero
entries. The ML-10M dataset contains about 10 million ratings applied to I = 10681 movies by
J = 71567 users, resulting in a sparse observed matrix X with 1.3% non-zero entries. Finally, The
ML-20M dataset contains about 20 million ratings applied to I = 27278 movies by J = 138493
users, resulting in a sparse observed matrix X with 0.5% non-zero entries. We randomly select 10%
of the data as the test set and use the remaining data for generating the samples. The rank of the
factorization is chosen as P = 10. We set σ_w² = σ_h² = σ_x² = 1. For all datasets, we use a constant
step size. We run SGLD for K = 10500 iterations where we discard the first 500 samples as burn-in.
In order to keep the computation time the same, we run SGRRLD for K = 5250 iterations where
we discard the first 250 iterations as burn-in. For ML-1M we set γ = 2 × 10^{−6}, and for ML-10M and ML-20M we set γ = 2 × 10^{−5}. The size of the subsamples B is selected as N/10, N/50,
and N/500 for ML-1M, ML-10M and ML-20M, respectively. We have implemented SGLD and
SGRRLD in C by using the GNU Scientific Library for efficient matrix computations. We fully
exploit the inherently parallel structure of SGRRLD by running the two chains in parallel as two
independent processes, whereas SGLD cannot benefit from this parallel computation architecture due
to its inherently sequential nature. Therefore their wall-clock times are nearly exactly the same.
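To make the setup concrete, here is a hedged sketch of what the mini-batch estimator (3) looks like for this model, written for the W factor (our own names and array layout; the paper's actual C implementation, and the analogous gradient for H, are not shown):

```python
import numpy as np

def grad_U_est_W(W, H, batch, n_ratings, sigma_w2, sigma_x2):
    """Unbiased stochastic gradient of U with respect to W for the matrix
    factorization model, computed from a mini-batch of observed ratings.

    W: (I, P) row factors; H: (J, P) column factors;
    batch: list of (i, j, x_ij) triples; n_ratings: total observed entries."""
    G = W / sigma_w2                        # prior term: -grad log p(W)
    scale = n_ratings / len(batch)          # reweight the subsampled likelihood
    for i, j, x_ij in batch:
        resid = W[i] @ H[j] - x_ij          # prediction error for entry (i, j)
        G[i] += scale * resid * H[j] / sigma_x2
    return G
```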
Figure 5 shows the comparison of SGLD and SGRRLD in terms of the root mean squared-errors
(RMSE) that are obtained on the test sets as a function of wall-clock time. The results clearly show
that in all datasets SGRRLD yields significant performance improvements. We observe that in the
ML-1M experiment SGRRLD requires only ≈ 200 seconds to achieve the accuracy that SGLD provides after ≈ 400 seconds. We see similar behavior in the ML-10M and ML-20M experiments:
SGRRLD appears to be more efficient than SGLD. The results indicate that by using our approach, we
either obtain the same accuracy as SGLD in a shorter time, or obtain a better accuracy by spending
the same amount of time as SGLD.
6 Conclusion
We presented SGRRLD, a novel scalable sampling algorithm that aims to reduce the bias of SG-MCMC while keeping the variance at a reasonable level by using RR extrapolation. We provided a formal theoretical analysis and showed that SGRRLD is asymptotically consistent and satisfies a central limit theorem. We further derived bounds for its non-asymptotic bias and mean squared-error, and showed that SGRRLD attains higher rates of convergence than all known SG-MCMC
methods with first-order integrators in both finite-time and asymptotically. We supported our findings
using both synthetic and real data experiments, where SGRRLD appeared to be more efficient than
SGLD in terms of computation time on a large-scale matrix factorization application. As a next step,
we plan to explore the use of the multi-level Monte Carlo approaches [27] in our framework.
Acknowledgements: This work is partly supported by the French National Research Agency (ANR)
as a part of the EDISON 3D project (ANR-13-CORD-0008-02).
References
[1] M. Welling and Y. W. Teh, "Bayesian learning via Stochastic Gradient Langevin Dynamics," in ICML, 2011, pp. 681–688.
[2] G. O. Roberts and R. L. Tweedie, "Exponential convergence of Langevin distributions and their discrete approximations," Bernoulli, vol. 2, no. 4, pp. 341–363, 1996.
[3] H. Robbins and S. Monro, "A stochastic approximation method," Ann. Math. Statist., vol. 22, no. 3, pp. 400–407, 1951.
[4] S. Ahn, A. Korattikara, and M. Welling, "Bayesian posterior sampling via stochastic gradient Fisher scoring," in ICML, 2012.
[5] S. Patterson and Y. W. Teh, "Stochastic gradient Riemannian Langevin dynamics on the probability simplex," in NIPS, 2013.
[6] T. Chen, E. B. Fox, and C. Guestrin, "Stochastic gradient Hamiltonian Monte Carlo," in ICML, 2014.
[7] N. Ding, Y. Fang, R. Babbush, C. Chen, R. D. Skeel, and H. Neven, "Bayesian sampling using stochastic gradient thermostats," in NIPS, 2014, pp. 3203–3211.
[8] X. Shang, Z. Zhu, B. Leimkuhler, and A. J. Storkey, "Covariance-controlled adaptive Langevin thermostat for large-scale Bayesian sampling," in NIPS, 2015, pp. 37–45.
[9] Y. A. Ma, T. Chen, and E. Fox, "A complete recipe for stochastic gradient MCMC," in NIPS, 2015, pp. 2899–2907.
[10] C. Chen, N. Ding, and L. Carin, "On the convergence of stochastic gradient MCMC algorithms with high-order integrators," in NIPS, 2015, pp. 2269–2277.
[11] C. Li, C. Chen, D. Carlson, and L. Carin, "Preconditioned stochastic gradient Langevin dynamics for deep neural networks," in AAAI Conference on Artificial Intelligence, 2016.
[12] U. Şimşekli, R. Badeau, A. T. Cemgil, and G. Richard, "Stochastic quasi-Newton Langevin Monte Carlo," in ICML, 2016.
[13] G. Pagès, "Multi-step Richardson-Romberg extrapolation: remarks on variance control and complexity," Monte Carlo Methods and Applications, vol. 13, no. 1, pp. 37, 2007.
[14] U. Grenander, "Tutorial in pattern theory," Division of Applied Mathematics, Brown University, Providence, 1983.
[15] D. Lamberton and G. Pagès, "Recursive computation of the invariant distribution of a diffusion: the case of a weakly mean reverting drift," Stoch. Dyn., vol. 3, no. 4, pp. 435–451, 2003.
[16] V. Lemaire, Estimation de la mesure invariante d'un processus de diffusion, Ph.D. thesis, Université Paris-Est, 2005.
[17] D. Lamberton and G. Pagès, "Recursive computation of the invariant distribution of a diffusion," Bernoulli, vol. 8, no. 3, pp. 367–405, 2002.
[18] I. Sato and H. Nakagawa, "Approximation analysis of stochastic gradient Langevin dynamics by using Fokker-Planck equation and Ito process," in ICML, 2014, pp. 982–990.
[19] Y. W. Teh, A. H. Thiéry, and S. J. Vollmer, "Consistency and fluctuations for stochastic gradient Langevin dynamics," Journal of Machine Learning Research, vol. 17, no. 7, pp. 1–33, 2016.
[20] Y. W. Teh, S. J. Vollmer, and K. C. Zygalakis, "(Non-)asymptotic properties of Stochastic Gradient Langevin Dynamics," arXiv preprint arXiv:1501.00438, 2015.
[21] D. Talay and L. Tubaro, "Expansion of the global error for numerical schemes solving stochastic differential equations," Stochastic Anal. Appl., vol. 8, no. 4, pp. 483–509 (1991), 1990.
[22] J. C. Mattingly, A. M. Stuart, and D. J. Higham, "Ergodicity for SDEs and approximations: locally Lipschitz vector fields and degenerate noise," Stochastic Process. Appl., vol. 101, no. 2, pp. 185–232, 2002.
[23] V. Lemaire, G. Pagès, and F. Panloup, "Invariant measure of duplicated diffusions and application to Richardson-Romberg extrapolation," Ann. Inst. H. Poincaré Probab. Statist., vol. 51, no. 4, pp. 1562–1596, 2015.
[24] R. Salakhutdinov and A. Mnih, "Bayesian probabilistic matrix factorization using Markov Chain Monte Carlo," in ICML, 2008, pp. 880–887.
[25] R. Gemulla, E. Nijkamp, P. J. Haas, and Y. Sismanis, "Large-scale matrix factorization with distributed stochastic gradient descent," in ACM SIGKDD, 2011.
[26] S. Ahn, A. Korattikara, N. Liu, S. Rajan, and M. Welling, "Large-scale distributed Bayesian matrix factorization using stochastic gradient MCMC," in KDD, 2015.
[27] V. Lemaire and G. Pagès, "Multilevel Richardson-Romberg extrapolation," arXiv preprint arXiv:1401.1177, 2014.
6,097 | 6,515 | Riemannian SVRG: Fast Stochastic Optimization on
Riemannian Manifolds
Hongyi Zhang (MIT), Sashank J. Reddi (Carnegie Mellon University), Suvrit Sra (MIT)
Abstract
We study optimization of finite sums of geodesically smooth functions on Riemannian manifolds. Although variance reduction techniques for optimizing finite sums have witnessed tremendous attention in recent years, existing work is limited to vector space problems. We introduce Riemannian SVRG (RSVRG), a new variance reduced Riemannian optimization method. We analyze RSVRG for both geodesically convex and nonconvex (smooth) functions. Our analysis reveals that RSVRG inherits advantages of the usual SVRG method, but with factors depending on curvature of the manifold that influence its convergence. To our knowledge, RSVRG is the first provably fast stochastic Riemannian method. Moreover, our paper presents the first non-asymptotic complexity analysis (novel even for the batch setting) for nonconvex Riemannian optimization. Our results have several implications; for instance, they offer a Riemannian perspective on variance reduced PCA, which promises a short, transparent convergence analysis.
1 Introduction
We study the following rich class of (possibly nonconvex) finite-sum optimization problems:
min_{x∈X⊆M} f(x) := (1/n) ∑_{i=1}^n f_i(x),   (1)
where (M, g) is a Riemannian manifold with the Riemannian metric g, and X ⊆ M is a geodesically convex set. We assume that each f_i : M → R is geodesically L-smooth (see §2). Problem (1)
generalizes the fundamental machine learning problem of empirical risk minimization, which is
usually cast in vector spaces, to a Riemannian setting. It also includes as special cases important
problems such as principal component analysis (PCA), independent component analysis (ICA),
dictionary learning, mixture modeling, among others (see e.g., the related work section).
The Euclidean version of (1), where M = R^d and g is the Euclidean inner product, has been the subject
of intense algorithmic development in machine learning and optimization, starting with the classical
work of Robbins and Monro [26] to the recent spate of work on variance reduction [10; 18; 20; 25; 28].
However, when (M, g) is a nonlinear Riemannian manifold, much less is known beyond [7; 38].
When solving problems with manifold constraints, one common approach is to alternate between
optimizing in the ambient Euclidean space and "projecting" onto the manifold. For example, two well-known methods to compute the leading eigenvector of symmetric matrices, power iteration and Oja's algorithm [23], are in essence projected gradient and projected stochastic gradient algorithms.
For certain manifolds (e.g., positive definite matrices), projections can be quite expensive to compute.
An effective alternative is to use Riemannian optimization¹, which directly operates on the manifold in question. This mode of operation allows Riemannian optimization to view the constrained optimization problem (1) as an unconstrained problem on a manifold, and thus, to be "projection-free." More important is its conceptual value: viewing a problem through the Riemannian lens, one can discover insights into problem geometry, which can translate into better optimization algorithms.
Although the Riemannian approach is appealing, our knowledge of it is fairly limited. In particular, there is little analysis of its global complexity (a.k.a. non-asymptotic convergence rate), in part due to the difficulty posed by the nonlinear metric. Only very recently have Zhang and Sra [38] developed the first global complexity analysis of batch and stochastic gradient methods for geodesically convex functions. However, the batch and stochastic gradient methods in [38] suffer from problems similar to their vector space counterparts. For solving finite-sum problems with $n$ components, the full-gradient method requires $n$ derivatives at each step; the stochastic method requires only one derivative, but at the expense of slower $O(1/\epsilon^2)$ convergence to an $\epsilon$-accurate solution.
These issues have motivated much of the recent progress on faster stochastic optimization in vector spaces using variance reduction [10; 18; 28] techniques. However, all ensuing methods critically rely on properties of vector spaces, so adapting them to the context of Riemannian manifolds poses major challenges. Given the richness of Riemannian optimization (it includes vector space optimization as a special case) and its growing number of applications, developing fast stochastic Riemannian optimization is important. It will help us apply Riemannian optimization to large-scale problems, while offering a new set of algorithmic tools for the practitioner's repertoire.
Contributions. We summarize the key contributions of this paper below.

- We introduce Riemannian SVRG (RSVRG), a variance reduced Riemannian stochastic gradient method based on SVRG [18]. We analyze RSVRG for geodesically strongly convex functions through a novel theoretical analysis that accounts for the nonlinear (curved) geometry of the manifold to yield linear convergence rates.
- Building on recent advances in variance reduction for nonconvex optimization [3; 25], we generalize the convergence analysis of RSVRG to (geodesically) nonconvex functions and also to gradient dominated functions (see §2 for the definition). Our analysis provides the first stochastic Riemannian method that is provably superior to both batch and stochastic (Riemannian) gradient methods for nonconvex finite-sum problems.
- Using a Riemannian formulation and applying our result for (geodesically) gradient-dominated functions, we provide new insights, and a short transparent analysis explaining the fast convergence of variance reduced PCA for computing the leading eigenvector of a symmetric matrix.
To our knowledge, this paper provides the first stochastic gradient method with global linear convergence rates for geodesically strongly convex functions, as well as the first non-asymptotic convergence rates for geodesically nonconvex optimization (even in the batch case). Our analysis reveals how manifold geometry, in particular curvature, impacts convergence rates. We illustrate the benefits of RSVRG by showing an application to computing leading eigenvectors of a symmetric matrix and to the task of computing the Riemannian centroid of covariance matrices, a problem that has received great attention in the literature [5; 16; 38].
Related Work. Variance reduction techniques, such as control variates, are widely used in Monte Carlo simulations [27]. In linear spaces, variance reduced methods for solving finite-sum problems have recently witnessed a huge surge of interest [e.g. 4; 10; 14; 18; 20; 28; 36]. They have been shown to accelerate stochastic optimization for strongly convex objectives, convex objectives, nonconvex $f_i$ ($i \in [n]$), and even when both $f$ and the $f_i$ ($i \in [n]$) are nonconvex [3; 25]. Reddi et al. [25] further proved global linear convergence for gradient dominated nonconvex problems. Our analysis is inspired by [18; 25], but applies to the substantially more general Riemannian optimization setting.
References on Riemannian optimization can be found in [1; 33], where analysis is limited to asymptotic convergence (except [33, Theorem 4.2], which proved linear-rate convergence for a first-order line search method with a bounded and positive definite Hessian). Stochastic Riemannian optimization has been previously considered in [7; 21], though with only asymptotic convergence analysis, and without any rates. Many applications of Riemannian optimization are known, including matrix factorization on fixed-rank manifolds [32; 34], dictionary learning [8; 31], optimization under orthogonality constraints [11; 22], covariance estimation [35], learning elliptical distributions [30; 39], and Gaussian mixture models [15]. Notably, some nonconvex Euclidean problems are geodesically convex, for which Riemannian optimization can provide similar guarantees to convex optimization. Zhang and Sra [38] provide the first global complexity analysis for first-order Riemannian algorithms, but their analysis is restricted to geodesically convex problems with full or stochastic gradients. In contrast, we propose RSVRG, a variance reduced Riemannian stochastic gradient algorithm, and analyze its global complexity for both geodesically convex and nonconvex problems.

¹ Riemannian optimization is optimization on a known manifold structure. Note the distinction from manifold learning, which attempts to learn a manifold structure from data. We briefly review some Riemannian optimization applications in the related work.
In parallel with our work, [19] also proposed and analyzed RSVRG specifically for the Grassmann
manifold. Their complexity analysis is restricted to local convergence to strict local minima, which
essentially corresponds to our analysis of (locally) geodesically strongly convex functions.
2 Preliminaries

Before formally discussing Riemannian optimization, let us recall some foundational concepts of Riemannian geometry. For a thorough review one can refer to any classic text, e.g., [24].
A Riemannian manifold $(\mathcal{M}, g)$ is a real smooth manifold $\mathcal{M}$ equipped with a Riemannian metric $g$. The metric $g$ induces an inner product structure in each tangent space $T_x\mathcal{M}$ associated with every $x \in \mathcal{M}$. We denote the inner product of $u, v \in T_x\mathcal{M}$ as $\langle u, v\rangle \triangleq g_x(u, v)$; and the norm of $u \in T_x\mathcal{M}$ is defined as $\|u\| \triangleq \sqrt{g_x(u, u)}$. The angle between $u$ and $v$ is defined as $\arccos\frac{\langle u, v\rangle}{\|u\|\|v\|}$. A geodesic is a constant speed curve $\gamma : [0, 1] \to \mathcal{M}$ that is locally distance minimizing. An exponential map $\mathrm{Exp}_x : T_x\mathcal{M} \to \mathcal{M}$ maps $v$ in $T_x\mathcal{M}$ to $y$ on $\mathcal{M}$, such that there is a geodesic $\gamma$ with $\gamma(0) = x$, $\gamma(1) = y$ and $\dot\gamma(0) \triangleq \frac{d}{dt}\gamma(0) = v$. If between any two points in $\mathcal{X} \subseteq \mathcal{M}$ there is a unique geodesic, the exponential map has an inverse $\mathrm{Exp}_x^{-1} : \mathcal{X} \to T_x\mathcal{M}$, and the geodesic is the unique shortest path, with $\|\mathrm{Exp}_x^{-1}(y)\| = \|\mathrm{Exp}_y^{-1}(x)\|$ the geodesic distance between $x, y \in \mathcal{X}$. Parallel transport $\Gamma_x^y : T_x\mathcal{M} \to T_y\mathcal{M}$ maps a vector $v \in T_x\mathcal{M}$ to $\Gamma_x^y v \in T_y\mathcal{M}$, while preserving norm, and roughly speaking, "direction," analogous to translation in $\mathbb{R}^d$. A tangent vector of a geodesic remains tangent if parallel transported along it. Parallel transport preserves inner products.

Figure 1: Illustration of manifold operations. (Left) A vector $v$ in $T_x\mathcal{M}$ is mapped to $\mathrm{Exp}_x(v)$; (right) a vector $v$ in $T_x\mathcal{M}$ is parallel transported to $T_y\mathcal{M}$ as $\Gamma_x^y v$.
The geometry of a Riemannian manifold is determined by its Riemannian metric tensor through various characterizations of curvature. Let $u, v \in T_x\mathcal{M}$ be linearly independent, so that they span a two dimensional subspace of $T_x\mathcal{M}$. Under the exponential map, this subspace is mapped to a two dimensional submanifold $U \subseteq \mathcal{M}$. The sectional curvature $\kappa(x, U)$ is defined as the Gauss curvature of $U$ at $x$. As we will mainly analyze manifold trigonometry, for worst-case analysis it is sufficient to consider sectional curvature.

Function Classes. We now define some key terms. A set $\mathcal{X}$ is called geodesically convex if for any $x, y \in \mathcal{X}$, there is a geodesic $\gamma$ with $\gamma(0) = x$, $\gamma(1) = y$ and $\gamma(t) \in \mathcal{X}$ for $t \in [0, 1]$. Throughout the paper, we assume that the function $f$ in (1) is defined on a geodesically convex set $\mathcal{X}$ on a Riemannian manifold $\mathcal{M}$.
We call a function $f : \mathcal{X} \to \mathbb{R}$ geodesically convex (g-convex) if for any $x, y \in \mathcal{X}$ and any geodesic $\gamma$ such that $\gamma(0) = x$, $\gamma(1) = y$ and $\gamma(t) \in \mathcal{X}$ for $t \in [0, 1]$, it holds that

$$f(\gamma(t)) \le (1 - t) f(x) + t f(y).$$

It can be shown that if the inverse exponential map is well-defined, an equivalent definition is that for any $x, y \in \mathcal{X}$, $f(y) \ge f(x) + \langle g_x, \mathrm{Exp}_x^{-1}(y)\rangle$, where $g_x$ is a subgradient of $f$ at $x$ (or the gradient if $f$ is differentiable). A function $f : \mathcal{X} \to \mathbb{R}$ is called geodesically $\mu$-strongly convex ($\mu$-strongly g-convex) if for any $x, y \in \mathcal{X}$ and subgradient $g_x$, it holds that

$$f(y) \ge f(x) + \langle g_x, \mathrm{Exp}_x^{-1}(y)\rangle + \tfrac{\mu}{2}\,\|\mathrm{Exp}_x^{-1}(y)\|^2.$$
We call a vector field $g : \mathcal{X} \to \mathbb{R}^d$ geodesically $L$-Lipschitz ($L$-g-Lipschitz) if for any $x, y \in \mathcal{X}$,

$$\|g(x) - \Gamma_y^x g(y)\| \le L\,\|\mathrm{Exp}_x^{-1}(y)\|,$$

where $\Gamma_y^x$ is the parallel transport from $y$ to $x$. We call a differentiable function $f : \mathcal{X} \to \mathbb{R}$ geodesically $L$-smooth ($L$-g-smooth) if its gradient is $L$-g-Lipschitz, in which case we have

$$f(y) \le f(x) + \langle g_x, \mathrm{Exp}_x^{-1}(y)\rangle + \tfrac{L}{2}\,\|\mathrm{Exp}_x^{-1}(y)\|^2.$$

We say $f : \mathcal{X} \to \mathbb{R}$ is $\tau$-gradient dominated if $x^*$ is a global minimizer of $f$ and for every $x \in \mathcal{X}$

$$f(x) - f(x^*) \le \tau\,\|\nabla f(x)\|^2. \qquad (2)$$
We recall the following trigonometric distance bound that is essential for our analysis:

Lemma 1 ([7; 38]). If $a, b, c$ are the side lengths of a geodesic triangle in a Riemannian manifold with sectional curvature lower bounded by $\kappa_{\min}$, and $A$ is the angle between sides $b$ and $c$ (defined through the inverse exponential map and inner product in tangent space), then

$$a^2 \le \frac{\sqrt{|\kappa_{\min}|}\, c}{\tanh\!\big(\sqrt{|\kappa_{\min}|}\, c\big)}\, b^2 + c^2 - 2bc\cos(A). \qquad (3)$$
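To see how the curvature term inflates the Euclidean law of cosines, here is a minimal numerical sketch of the bound (3); the side lengths, angle, and curvature lower bound are hypothetical inputs.

```python
import numpy as np

def squared_side_bound(b, c, A, kappa_min):
    """Upper bound on a^2 from Lemma 1 / Eq. (3).

    For kappa_min >= 0 the curvature coefficient is 1 and the bound
    reduces to the Euclidean law of cosines; for kappa_min < 0 the
    coefficient t/tanh(t) > 1 inflates the b^2 term.
    """
    if kappa_min >= 0:
        coef = 1.0
    else:
        t = np.sqrt(abs(kappa_min)) * c
        coef = t / np.tanh(t)
    return coef * b**2 + c**2 - 2.0 * b * c * np.cos(A)
```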
An Incremental First-order Oracle (IFO) [2] for (1) takes an $i \in [n]$ and a point $x \in \mathcal{X}$, and returns a pair $(f_i(x), \nabla f_i(x)) \in \mathbb{R} \times T_x\mathcal{M}$. We measure non-asymptotic complexity in terms of IFO calls.
3 Riemannian SVRG

In this section we introduce RSVRG formally. We make the following standing assumptions: (a) $f$ attains its optimum at $x^* \in \mathcal{X}$; (b) $\mathcal{X}$ is compact, and the diameter of $\mathcal{X}$ is bounded by $D$, that is, $\max_{x,y \in \mathcal{X}} d(x, y) \le D$; (c) the sectional curvature in $\mathcal{X}$ is upper bounded by $\kappa_{\max}$, and within $\mathcal{X}$ the exponential map is invertible; and (d) the sectional curvature in $\mathcal{X}$ is lower bounded by $\kappa_{\min}$. We define the following key geometric constant that captures the impact of manifold curvature:
$$\zeta = \begin{cases} \dfrac{\sqrt{|\kappa_{\min}|}\, D}{\tanh\!\big(\sqrt{|\kappa_{\min}|}\, D\big)}, & \text{if } \kappa_{\min} < 0, \\[1ex] 1, & \text{if } \kappa_{\min} \ge 0. \end{cases} \qquad (4)$$

We note that most (if not all) practical manifold optimization problems can satisfy these assumptions.
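Since $\zeta$ enters every rate below, a one-line helper makes its behavior easy to inspect; a minimal sketch, with the curvature lower bound and diameter as problem-dependent inputs:

```python
import numpy as np

def zeta(kappa_min, D):
    """Curvature constant of Eq. (4): equals 1 on nonnegatively curved
    domains and grows roughly like sqrt(|kappa_min|) * D otherwise."""
    if kappa_min >= 0:
        return 1.0
    t = np.sqrt(abs(kappa_min)) * D
    return t / np.tanh(t)
```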
Our proposed RSVRG algorithm is shown in Algorithm 1. Compared with Euclidean SVRG, it differs in two key aspects: the variance reduction step uses parallel transport to combine gradients from different tangent spaces; and the exponential map is used in the update $x_{t+1}^{s+1} = \mathrm{Exp}_{x_t^{s+1}}(-\eta v_t^{s+1})$ (instead of a vector space subtraction).
3.1 Convergence analysis for strongly g-convex functions

In this section, we analyze the global complexity of RSVRG for solving (1) where each $f_i$ ($i \in [n]$) is g-smooth and $f$ is strongly g-convex. In this case, we show that RSVRG has a linear convergence rate. This is in contrast with the $O(1/t)$ rate of the Riemannian stochastic gradient algorithm for strongly g-convex functions [38].
Theorem 1. Assume in (1) each $f_i$ is $L$-g-smooth, and $f$ is $\mu$-strongly g-convex. If we run Algorithm 1 with Option I and parameters that satisfy

$$\alpha = \frac{3\zeta\eta L^2}{\mu - 2\zeta\eta L^2} + \frac{(1 + 4\zeta\eta^2 - 2\eta\mu)^m\,(\mu - 5\zeta\eta L^2)}{\mu - 2\zeta\eta L^2} < 1,$$

then with $S$ outer loops, the Riemannian SVRG algorithm produces an iterate $x_a$ that satisfies

$$\mathbb{E}\,d^2(x_a, x^*) \le \alpha^S\, d^2(x^0, x^*).$$
Algorithm 1: RSVRG$(x^0, m, \eta, S)$
  Parameters: update frequency $m$, learning rate $\eta$, number of epochs $S$
  initialize $\tilde{x}^0 = x^0$
  for $s = 0, 1, \ldots, S-1$ do
    $x_0^{s+1} = \tilde{x}^s$
    $g^{s+1} = \frac{1}{n}\sum_{i=1}^n \nabla f_i(\tilde{x}^s)$
    for $t = 0, 1, \ldots, m-1$ do
      Randomly pick $i_t \in \{1, \ldots, n\}$
      $v_t^{s+1} = \nabla f_{i_t}(x_t^{s+1}) - \Gamma_{\tilde{x}^s}^{x_t^{s+1}}\big(\nabla f_{i_t}(\tilde{x}^s) - g^{s+1}\big)$
      $x_{t+1}^{s+1} = \mathrm{Exp}_{x_t^{s+1}}\big(-\eta\, v_t^{s+1}\big)$
    end
    Set $\tilde{x}^{s+1} = x_m^{s+1}$
  end
  Option I: output $x_a = \tilde{x}^S$
  Option II: output $x_a$ chosen uniformly randomly from $\{\{x_t^{s+1}\}_{t=0}^{m-1}\}_{s=0}^{S-1}$
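To make the update concrete, the following is a minimal NumPy sketch of one outer iteration of Algorithm 1 specialized to the unit sphere, where the exponential map and parallel transport have closed forms; the function and variable names (`grads`, `rng`, etc.) are illustrative and not taken from any released code.

```python
import numpy as np

def sphere_exp(x, v):
    """Exponential map on the unit sphere: move along the great circle."""
    nv = np.linalg.norm(v)
    if nv < 1e-12:
        return x
    return np.cos(nv) * x + np.sin(nv) * (v / nv)

def sphere_transport(x, y, v):
    """Parallel transport of v in T_x S^{d-1} to T_y S^{d-1} along the
    minimal geodesic (valid when y != -x)."""
    return v - (y @ v) / (1.0 + x @ y) * (x + y)

def rsvrg_epoch(grads, x_tilde, eta, m, rng):
    """One outer iteration (s -> s+1) of Algorithm 1 on the sphere.
    grads[i](x) returns the Riemannian gradient of f_i at x."""
    n = len(grads)
    g_full = sum(g(x_tilde) for g in grads) / n   # full gradient at snapshot
    x = x_tilde.copy()
    for _ in range(m):
        i = rng.integers(n)
        # variance-reduced direction: transport the snapshot correction
        corr = sphere_transport(x_tilde, x, grads[i](x_tilde) - g_full)
        v = grads[i](x) - corr
        x = sphere_exp(x, -eta * v)
    return x   # becomes the next snapshot x_tilde^{s+1}
```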
The proof of Theorem 1 is in the appendix, and takes a different route compared with the original SVRG proof [18]. Specifically, due to the nonlinear Riemannian metric, we are not able to bound the squared norm of the variance reduced gradient by $f(x) - f(x^*)$. Instead, we bound this quantity by the squared distances to the minimizer, and show linear convergence of the iterates. A bound on $\mathbb{E}[f(x) - f(x^*)]$ is then implied by $L$-g-smoothness, albeit with a stronger dependence on the condition number. Theorem 1 leads to the following more digestible corollary on the global complexity of the algorithm:
Corollary 1. With assumptions as in Theorem 1 and properly chosen parameters, after $O\big((n + \zeta L^2/\mu^2)\log(1/\epsilon)\big)$ IFO calls, the output $x_a$ satisfies

$$\mathbb{E}[f(x_a) - f(x^*)] \le \epsilon.$$
We give a proof with specific parameter choices in the appendix. Observe the dependence on $\zeta$ in our result: for $\kappa_{\min} < 0$, we have $\zeta > 1$, which implies that negative space curvature adversarially affects the convergence rate; while for $\kappa_{\min} \ge 0$, we have $\zeta = 1$, which implies that for nonnegatively curved manifolds the impact of curvature is not explicit. In the rest of our analysis we will see a similar effect of sectional curvature; this phenomenon seems innate to manifold optimization (also see [38]). In the analysis we do not assume each $f_i$ to be g-convex, which results in a worse dependence on the condition number. We note that a similar result was obtained in linear space [12]. However, we will see in the next section that by generalizing the analysis for gradient dominated functions in [25], we are able to greatly improve this dependence.
3.2 Convergence analysis for geodesically nonconvex functions

In this section, we analyze the global complexity of RSVRG for solving (1) where each $f_i$ is only required to be $L$-g-smooth, and neither $f_i$ nor $f$ need be g-convex. We measure convergence to a stationary point using $\|\nabla f(x)\|^2$ following [13]. Note, however, that here $\nabla f(x) \in T_x\mathcal{M}$ and $\|\nabla f(x)\|$ is defined via the inner product in $T_x\mathcal{M}$. We first note that Riemannian SGD on nonconvex $L$-g-smooth problems attains $O(1/\epsilon^2)$ convergence just as SGD [13] does; we relegate the details to the appendix.

Recently, two groups independently proved that variance reduction also benefits stochastic gradient methods for nonconvex smooth finite-sum optimization problems, with different analyses [3; 25]. Our analysis for nonconvex RSVRG is inspired by [25]. Our main result for this section is Theorem 2.
Theorem 2. Assume in (1) each $f_i$ is $L$-g-smooth and the sectional curvature in $\mathcal{X}$ is lower bounded by $\kappa_{\min}$, and we run Algorithm 1 with Option II. Then there exist universal constants $\mu_0 \in (0, 1)$, $\nu > 0$ such that if we set $\eta = \mu_0/(L n^{\alpha_1} \zeta^{\alpha_2})$ (with $0 < \alpha_1 \le 1$ and $0 \le \alpha_2 \le 2$), $m = \lfloor n^{3\alpha_1/2}/(3\mu_0 \zeta^{1 - 2\alpha_2}) \rfloor$ and $T = mS$, we have

$$\mathbb{E}[\|\nabla f(x_a)\|^2] \le \frac{L n^{\alpha_1} \zeta^{\alpha_2}\, [f(x^0) - f(x^*)]}{T\nu},$$

where $x^*$ is an optimal solution to (1).
Algorithm 2: GD-SVRG$(x^0, m, \eta, S, K)$
  Parameters: update frequency $m$, learning rate $\eta$, number of epochs $S$, $K$, initial point $x^0$
  for $k = 0, \ldots, K-1$ do
    $x^{k+1} = \text{RSVRG}(x^k, m, \eta, S)$ with Option II
  end
  Output: $x^K$
The key challenge in proving Theorem 2 in the Riemannian setting is to incorporate the impact of using a nonlinear metric. Similar to the g-convex case, the nonlinear metric impacts the convergence, notably through the constant $\zeta$ that depends on a lower bound on the sectional curvature. Reddi et al. [25] suggested setting $\alpha_1 = 2/3$, in which case we obtain the following corollary.
Corollary 2. With the assumptions and parameters of Theorem 2, choosing $\alpha_1 = 2/3$, the IFO complexity for achieving an $\epsilon$-accurate solution is:

$$\text{IFO calls} = \begin{cases} O\big(n + n^{2/3}\,\zeta^{1-\alpha_2}/\epsilon\big), & \text{if } \alpha_2 \le 1/2, \\ O\big(n\,\zeta^{2\alpha_2 - 1} + n^{2/3}\,\zeta^{\alpha_2}/\epsilon\big), & \text{if } \alpha_2 > 1/2. \end{cases}$$

Setting $\alpha_2 = 1/2$ in Corollary 2 immediately leads to Corollary 3:

Corollary 3. With the assumptions of Theorem 2 and $\alpha_1 = 2/3$, $\alpha_2 = 1/2$, the IFO complexity for achieving an $\epsilon$-accurate solution is $O\big(n + n^{2/3}\,\zeta^{1/2}/\epsilon\big)$.
The same reasoning allows us to also capture the class of gradient dominated functions (2), for which
Reddi et al. [25] proved that SVRG converges linearly to a global optimum. We have the following
corresponding theorem for RSVRG:
Theorem 3. Suppose that in addition to the assumptions of Theorem 2, $f$ is $\tau$-gradient dominated. Then there exist universal constants $\mu_0 \in (0, 1)$, $\nu > 0$ such that if we run Algorithm 2 with $\eta = \mu_0/(L n^{2/3} \zeta^{1/2})$, $m = \lfloor n/(3\mu_0) \rfloor$ and $S = \big\lceil \big(6 + \tfrac{18\mu_0}{n^{1/3}}\big)\, L\tau\zeta^{1/2}\,\nu/(\mu_0 n^{1/3}) \big\rceil$, we have

$$\mathbb{E}[\|\nabla f(x^K)\|^2] \le 2^{-K}\, \|\nabla f(x^0)\|^2,$$
$$\mathbb{E}[f(x^K) - f(x^*)] \le 2^{-K}\, [f(x^0) - f(x^*)].$$
We summarize the implication of Theorem 3 as follows (note the dependence on curvature):

Corollary 4. With Algorithm 2 and the parameters in Theorem 3, the IFO complexity to compute an $\epsilon$-accurate solution for a gradient dominated function $f$ is $O\big((n + L\tau\zeta^{1/2} n^{2/3})\log(1/\epsilon)\big)$.

A typical example of a gradient dominated function is a strongly g-convex function (see appendix). Specifically, we have the following corollary, which proves a linear convergence rate for RSVRG under the same assumptions as in Theorem 1, improving the dependence on the condition number.

Corollary 5. With Algorithm 2 and the parameters in Theorem 3, the IFO complexity to compute an $\epsilon$-accurate solution for a $\mu$-strongly g-convex function $f$ is $O\big((n + \mu^{-1} L \zeta^{1/2} n^{2/3})\log(1/\epsilon)\big)$.
4 Applications

4.1 Computing the leading eigenvector

In this section, we apply our analysis of RSVRG for gradient dominated functions (Theorem 3) to fast eigenvector computation, a fundamental problem that is still being actively researched in the big-data setting [12; 17; 29]. For the problem of computing the leading eigenvector, i.e.,

$$\min_{x^\top x = 1} \; -x^\top \Big(\sum\nolimits_{i=1}^{n} z_i z_i^\top\Big)\, x \;\triangleq\; -x^\top A x = f(x), \qquad (5)$$
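On the hypersphere the Riemannian gradient of this objective is simply the tangential projection of the Euclidean gradient; a minimal sketch (the helper name is ours):

```python
import numpy as np

def pca_riemannian_grad(A, x):
    """Riemannian gradient of f(x) = -x' A x on the unit sphere S^{d-1}:
    project the Euclidean gradient -2 A x onto the tangent space at x."""
    egrad = -2.0 * (A @ x)
    return egrad - (x @ egrad) * x   # (I - x x') egrad
```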
existing analyses for state-of-the-art algorithms typically result in an $O(1/\delta^2)$ dependence on the eigengap $\delta$ of $A$, as opposed to the conjectured $O(1/\delta)$ dependence [29], as well as the $O(1/\delta)$ dependence of power iteration. Here we give new support for the $O(1/\delta)$ conjecture. Note that Problem (5), seen as one in $\mathbb{R}^d$, is nonconvex, with negative semidefinite Hessian everywhere, and has nonlinear constraints. However, we show that on the hypersphere $S^{d-1}$ Problem (5) is unconstrained, and has a gradient dominated objective. In particular we have the following result:
Theorem 4. Suppose $A$ has eigenvalues $\lambda_1 > \lambda_2 \ge \cdots \ge \lambda_d$, with eigengap $\delta = \lambda_1 - \lambda_2$, and $x^0$ is drawn uniformly at random on the hypersphere. Then with probability $1 - p$, $x^0$ falls in a Riemannian ball of a global optimum of the objective function, within which the objective function is $O\big(\tfrac{d}{p^2\delta}\big)$-gradient dominated.

We provide the proof of Theorem 4 in the appendix. Theorem 4 gives new insight into why the conjecture might be true: once it is shown that with a constant stepsize and with high probability (both independent of $\delta$) the iterates remain in such a Riemannian ball, applying Corollary 4 one can immediately prove the $O(1/\delta)$ dependence conjecture. We leave this analysis as future work.
Next we show that variance reduced PCA (VR-PCA) [29] is closely related to RSVRG. We implement Riemannian SVRG for PCA, and use the code for VR-PCA from [29]. Analytic forms for the exponential map and parallel transport on the hypersphere can be found in [1, Example 5.4.1; Example 8.1.1]. We conduct well-controlled experiments comparing the performance of the two algorithms. Specifically, to investigate the dependence of convergence on $\delta$, for each $\delta = 10^{-3}/k$ where $k = 1, \ldots, 25$, we generate a $d \times n$ matrix $Z = (z_1, \ldots, z_n)$ with $d = 10^3$, $n = 10^4$ using the method $Z = UDV^\top$, where $U, V$ are orthonormal matrices and $D$ is a diagonal matrix, as described in [29]. Note that $A$ has the same eigenvalues as $D^2$. All the data matrices share the same $U, V$ and only differ in $\delta$ (thus also in $D$). We also fix the same random initialization $x^0$ and random seed. We run both algorithms on each matrix for 50 epochs. For every five epochs, we estimate the number of epochs required to double the accuracy.² This number can serve as an indicator of the global complexity of the algorithm. We plot this number for different epochs against $1/\delta$, as shown in Figure 2. Note that the performance of RSVRG and VR-PCA with the same stepsize is very similar, which implies a close connection between the two. Indeed, the update $\frac{x + v}{\|x + v\|}$ used in [29] and elsewhere is a well-known approximation to the exponential map $\mathrm{Exp}_x(v)$ with small stepsize (a.k.a. a retraction). Also note that the complexity of both algorithms seems to have an asymptotically linear dependence on $1/\delta$.
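The two algorithms differ essentially in how a tangent step is mapped back to the sphere; a minimal sketch contrasting the exact exponential map with the retraction, which agree to first order in $\|v\|$:

```python
import numpy as np

def sphere_exp(x, v):
    """Exact exponential map on the unit sphere."""
    nv = np.linalg.norm(v)
    return x if nv < 1e-12 else np.cos(nv) * x + np.sin(nv) * (v / nv)

def sphere_retraction(x, v):
    """The (x + v)/||x + v|| update used by VR-PCA: a retraction that
    matches the exponential map to first order in ||v||."""
    y = x + v
    return y / np.linalg.norm(y)
```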
Figure 2: Computing the leading eigenvector (panels plot the number of epochs required to double the accuracy for RSVRG and VR-PCA). Left: RSVRG and VR-PCA are indistinguishable in terms of IFO complexity. Middle and right: complexity appears to depend on $1/\delta$. The x-axis shows the inverse of the eigengap $\delta$, the y-axis shows the estimated number of epochs required to double the accuracy. Lines represent different epoch indices. All variables are controlled except for $\delta$.
4.2 Computing the Riemannian centroid

In this subsection we validate that RSVRG converges linearly for averaging PSD matrices under the Riemannian metric. The problem of finding the Riemannian centroid of a set of PSD matrices $\{A_i\}_{i=1}^n$ is

$$X^* = \arg\min_{X \succ 0} \; f\big(X; \{A_i\}_{i=1}^n\big) \triangleq \sum_{i=1}^n \big\| \log\big(X^{-1/2} A_i X^{-1/2}\big) \big\|_F^2,$$

where $X$ is also a PSD matrix. This is a geodesically strongly convex problem, yet nonconvex in Euclidean space. It has been studied both in matrix computation and in various applications [5; 16]. We use the same experiment setting as described in [38],³ and compare RSVRG against the Riemannian full gradient (RGD) and stochastic gradient (RSGD) algorithms (Figure 3). Other methods for this problem include the relaxed Richardson iteration algorithm [6], the approximated joint diagonalization algorithm [9],
and Riemannian Newton and quasi-Newton type methods, notably the limited-memory Riemannian BFGS [37]. However, none of these methods were shown to greatly outperform RGD, especially in data science applications where $n$ is large and extremely small optimization error is not required.

² Accuracy is measured by $\frac{f(x) - f(x^*)}{|f(x^*)|}$, i.e. the relative error between the objective value and the optimum. We measure how much the error has been reduced after each five epochs, which is a multiplicative factor $c < 1$ on the error at the start of each five epochs. Then we use $\log(2)/\log(1/c) \times 5$ as the estimate, assuming $c$ stays constant.
³ We generate $100 \times 100$ random PSD matrices using the Matrix Mean Toolbox [6] with normalization so that the norm of each matrix equals 1.
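Under the affine-invariant metric the Riemannian gradient of each squared-distance term has a closed form, namely $-2\,\mathrm{Exp}_X^{-1}(A_i)$; the following is an illustrative sketch of the objective and one full (RGD-style) gradient step, not the authors' implementation:

```python
import numpy as np
from scipy.linalg import sqrtm, logm, expm, inv

def centroid_objective(X, As):
    """f(X) = sum_i || log(X^{-1/2} A_i X^{-1/2}) ||_F^2.
    For SPD inputs these matrix functions are real up to numerical noise."""
    Xih = inv(sqrtm(X))                      # X^{-1/2}
    return sum(np.linalg.norm(logm(Xih @ A @ Xih), 'fro')**2 for A in As)

def rgd_step(X, As, eta):
    """One full Riemannian gradient step under the affine-invariant metric:
    grad f(X) = -2 sum_i Exp_X^{-1}(A_i), with
    Exp_X^{-1}(A) = X^{1/2} log(X^{-1/2} A X^{-1/2}) X^{1/2} and
    Exp_X(V)     = X^{1/2} exp(X^{-1/2} V X^{-1/2}) X^{1/2}."""
    Xh = sqrtm(X)
    Xih = inv(Xh)
    grad = -2.0 * sum(Xh @ logm(Xih @ A @ Xih) @ Xh for A in As)
    V = -eta * grad
    return Xh @ expm(Xih @ V @ Xih) @ Xh     # exponential-map update
```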
Note that the objective is a sum of squared Riemannian distances in a nonpositively curved space, thus it is $(2n)$-strongly g-convex and $(2n\zeta)$-g-smooth. According to the proof of Corollary 1 (see appendix), the optimal stepsize for RSVRG is $O(1/(\zeta^3 n))$. For all the experiments, we initialize all the algorithms using the arithmetic mean of the matrices. We set $\eta = \frac{1}{100n}$, choose $m = n$ in Algorithm 1 for RSVRG, and use the suggested parameters from [38] for the other algorithms. The results suggest RSVRG has a clear advantage in the large-scale setting.

Figure 3: Riemannian mean of PSD matrices (four panels: $N{=}100, Q{=}10^2$; $N{=}100, Q{=}10^8$; $N{=}1000, Q{=}10^2$; $N{=}1000, Q{=}10^8$; each plots accuracy against #IFO calls for RGD, RSGD and RSVRG). $N$: number of matrices, $Q$: condition number of each matrix. The x-axis shows the actual number of IFO calls, the y-axis shows $f(X) - f(X^*)$ in log scale. Lines show the performance of the different algorithms. Note that RSVRG achieves linear convergence and is especially advantageous for large datasets.
5 Discussion

We introduce Riemannian SVRG, the first variance reduced stochastic gradient algorithm for Riemannian optimization. In addition, we analyze its global complexity for optimizing geodesically strongly convex, convex, and nonconvex functions, explicitly showing the dependence on sectional curvature. Our experiments validate our analysis that Riemannian SVRG is much faster than full gradient and stochastic gradient methods for solving finite-sum optimization problems on Riemannian manifolds. Our analysis of computing the leading eigenvector as a Riemannian optimization problem is also worth noting: a nonconvex problem with nonpositive Hessian and nonlinear constraints in the ambient space turns out to be gradient dominated on the manifold. We believe this shows the promise of the theoretical study of Riemannian optimization, and geometric optimization in general, and we hope it encourages other researchers in the community to join this endeavor.
Our work also has limitations: most practical Riemannian optimization algorithms use retraction and vector transport to efficiently approximate the exponential map and parallel transport, which we do not analyze in this work. A systematic study of retraction and vector transport is an important topic for future research. For other applications of Riemannian optimization such as low-rank matrix completion [34], covariance matrix estimation [35] and subspace tracking [11], we believe it would also be promising to apply fast incremental gradient algorithms in the large-scale setting.
Acknowledgment: SS acknowledges support of NSF grant: IIS-1409802. HZ acknowledges support
from the Leventhal Fellowship.
References

[1] P.-A. Absil, R. Mahony, and R. Sepulchre. Optimization algorithms on matrix manifolds. Princeton University Press, 2009.
[2] A. Agarwal and L. Bottou. A lower bound for the optimization of finite sums. In Proceedings of the 32nd International Conference on Machine Learning (ICML-15), pages 78–86, 2015.
[3] Z. Allen-Zhu and E. Hazan. Variance reduction for faster non-convex optimization. arXiv:1603.05643, 2016.
[4] F. Bach and E. Moulines. Non-strongly-convex smooth stochastic approximation with convergence rate O(1/n). In Advances in Neural Information Processing Systems, pages 773–781, 2013.
[5] R. Bhatia. Positive Definite Matrices. Princeton University Press, 2007.
[6] D. A. Bini and B. Iannazzo. Computing the Karcher mean of symmetric positive definite matrices. Linear Algebra and its Applications, 438(4):1700–1710, 2013.
[7] S. Bonnabel. Stochastic gradient descent on Riemannian manifolds. Automatic Control, IEEE Transactions on, 58(9):2217–2229, 2013.
[8] A. Cherian and S. Sra. Riemannian dictionary learning and sparse coding for positive definite matrices. arXiv:1507.02772, 2015.
[9] M. Congedo, B. Afsari, A. Barachant, and M. Moakher. Approximate joint diagonalization and geometric mean of symmetric positive definite matrices. PLoS ONE, 10(4):e0121423, 2015.
[10] A. Defazio, F. Bach, and S. Lacoste-Julien. SAGA: A fast incremental gradient method with support for non-strongly convex composite objectives. In NIPS, pages 1646–1654, 2014.
[11] A. Edelman, T. A. Arias, and S. T. Smith. The geometry of algorithms with orthogonality constraints. SIAM Journal on Matrix Analysis and Applications, 20(2):303–353, 1998.
[12] D. Garber and E. Hazan. Fast and simple PCA via convex optimization. arXiv preprint arXiv:1509.05647, 2015.
[13] S. Ghadimi and G. Lan. Stochastic first- and zeroth-order methods for nonconvex stochastic programming. SIAM Journal on Optimization, 23(4):2341–2368, 2013.
[14] P. Gong and J. Ye. Linear convergence of variance-reduced stochastic gradient without strong convexity. arXiv preprint arXiv:1406.1102, 2014.
[15] R. Hosseini and S. Sra. Matrix manifold optimization for Gaussian mixtures. In NIPS, 2015.
[16] B. Jeuris, R. Vandebril, and B. Vandereycken. A survey and comparison of contemporary algorithms for computing the matrix geometric mean. Electronic Transactions on Numerical Analysis, 39:379–402, 2012.
[17] C. Jin, S. M. Kakade, C. Musco, P. Netrapalli, and A. Sidford. Robust shift-and-invert preconditioning: Faster and more sample efficient algorithms for eigenvector computation. arXiv:1510.08896, 2015.
[18] R. Johnson and T. Zhang. Accelerating stochastic gradient descent using predictive variance reduction. In Advances in Neural Information Processing Systems, pages 315–323, 2013.
[19] H. Kasai, H. Sato, and B. Mishra. Riemannian stochastic variance reduced gradient on Grassmann manifold. arXiv preprint arXiv:1605.07367, 2016.
[20] J. Konečný and P. Richtárik. Semi-stochastic gradient descent methods. arXiv:1312.1666, 2013.
[21] X. Liu, A. Srivastava, and K. Gallivan. Optimal linear representations of images for object recognition. IEEE TPAMI, 26(5):662–666, 2004.
[22] M. Moakher. Means and averaging in the group of rotations. SIAM Journal on Matrix Analysis and Applications, 24(1):1–16, 2002.
[23] E. Oja. Principal components, minor components, and linear neural networks. Neural Networks, 5(6):927–935, 1992.
[24] P. Petersen. Riemannian geometry, volume 171. Springer Science & Business Media, 2006.
[25] S. J. Reddi, A. Hefny, S. Sra, B. Póczos, and A. Smola. Stochastic variance reduction for nonconvex optimization. arXiv:1603.06160, 2016.
[26] H. Robbins and S. Monro. A stochastic approximation method. Annals of Mathematical Statistics, 22:400–407, 1951.
[27] R. Y. Rubinstein and D. P. Kroese. Simulation and the Monte Carlo method, volume 707. John Wiley & Sons, 2011.
[28] M. Schmidt, N. Le Roux, and F. Bach. Minimizing finite sums with the stochastic average gradient. arXiv:1309.2388, 2013.
[29] O. Shamir. A stochastic PCA and SVD algorithm with an exponential convergence rate. In International Conference on Machine Learning (ICML-15), pages 144–152, 2015.
[30] S. Sra and R. Hosseini. Geometric optimisation on positive definite matrices for elliptically contoured distributions. In Advances in Neural Information Processing Systems, pages 2562–2570, 2013.
[31] J. Sun, Q. Qu, and J. Wright. Complete dictionary recovery over the sphere II: Recovery by Riemannian trust-region method. arXiv:1511.04777, 2015.
[32] M. Tan, I. W. Tsang, L. Wang, B. Vandereycken, and S. J. Pan. Riemannian pursuit for big matrix recovery. In International Conference on Machine Learning (ICML-14), pages 1539–1547, 2014.
[33] C. Udriste. Convex functions and optimization methods on Riemannian manifolds, volume 297. Springer Science & Business Media, 1994.
[34] B. Vandereycken. Low-rank matrix completion by Riemannian optimization. SIAM Journal on Optimization, 23(2):1214–1236, 2013.
[35] A. Wiesel. Geodesic convexity and covariance estimation. IEEE Transactions on Signal Processing, 60(12):6182–6189, 2012.
[36] L. Xiao and T. Zhang. A proximal stochastic gradient method with progressive variance reduction. SIAM Journal on Optimization, 24(4):2057–2075, 2014.
[37] X. Yuan, W. Huang, P.-A. Absil, and K. Gallivan. A Riemannian limited-memory BFGS algorithm for computing the matrix geometric mean. Procedia Computer Science, 80:2147–2157, 2016.
[38] H. Zhang and S. Sra. First-order methods for geodesically convex optimization. arXiv:1602.06053, 2016.
[39] T. Zhang, A. Wiesel, and M. S. Greco. Multivariate generalized Gaussian distribution: Convexity and graphical models. Signal Processing, IEEE Transactions on, 61(16):4141–4148, 2013.
Tensor Switching Networks

Chuan-Yung Tsai*, Andrew Saxe*, David Cox
Center for Brain Science, Harvard University, Cambridge, MA 02138
{chuanyungtsai,asaxe,davidcox}@fas.harvard.edu
Abstract
We present a novel neural network algorithm, the Tensor Switching (TS) network,
which generalizes the Rectified Linear Unit (ReLU) nonlinearity to tensor-valued
hidden units. The TS network copies its entire input vector to different locations in
an expanded representation, with the location determined by its hidden unit activity.
In this way, even a simple linear readout from the TS representation can implement
a highly expressive deep-network-like function. The TS network hence avoids the
vanishing gradient problem by construction, at the cost of larger representation size.
We develop several methods to train the TS network, including equivalent kernels
for infinitely wide and deep TS networks, a one-pass linear learning algorithm, and
two backpropagation-inspired representation learning algorithms. Our experimental
results demonstrate that the TS network is indeed more expressive and consistently
learns faster than standard ReLU networks.
1 Introduction
Deep networks [1, 2] continue to post impressive successes in a wide range of tasks, and the Rectified
Linear Unit (ReLU) [3, 4] is arguably the most used simple nonlinearity. In this work we develop a
novel deep learning algorithm, the Tensor Switching (TS) network, which generalizes the ReLU such
that each hidden unit conveys a tensor, instead of scalar, yielding a more expressive model. Like the
ReLU network, the TS network is a linear function of its input, conditioned on the activation pattern
of its hidden units. By separating the decision to activate from the analysis performed when active,
even a linear classifier can reach back across all layers to the input of the TS network, implementing
a deep-network-like function while avoiding the vanishing gradient problem [5], which can otherwise
significantly slow down learning in deep networks. The trade-off is the representation size.
We exploit the properties of TS networks to develop several methods suitable for learning in different
scaling regimes, including their equivalent kernels for SVMs on small to medium datasets, a one-pass
linear learning algorithm which visits each data point only once for use with very large but simpler
datasets, and two backpropagation-inspired representation learning algorithms for more generic use.
Our experimental results show that TS networks are indeed more expressive and consistently learn
faster than standard ReLU networks.
Related work is briefly summarized as follows. With respect to improving the nonlinearities, the idea
of severing activation and analysis weights (or having multiple sets of weights) in each hidden layer
has been studied in [6, 7, 8]. Reordering activation and analysis is proposed by [9]. On tackling the
vanishing gradient problem, tensor methods are used by [10] to train single-hidden-layer networks.
Convex learning and inference in various deep architectures can be found in [11, 12, 13] too. Finally,
conditional linearity of deep ReLU networks is also used by [14], mainly to analyze their performance.
In comparison, the TS network does not simply reorder or sever activation and analysis within each
hidden layer. Instead, it is a cross-layer generalization of these concepts, which can be applied with
most of the recent deep learning architectures [15, 9], not only to increase their expressiveness, but
also to help avoid the vanishing gradient problem (see Sec. 2.3).
* Equal contribution.
Figure 1: (Left) A single-hidden-layer standard (i.e. Scalar Switching) ReLU network, mapping input $X_0$ through weights $W_1$ and a linear readout to output $y$. (Right) A single-hidden-layer Tensor Switching ReLU network, where each hidden unit conveys a vector of activities: inactive units (top-most unit) convey a vector of zeros, while active units (bottom two units) convey a copy of their input.
2 Tensor Switching Networks
In the following we first construct the definition of shallow (single-hidden-layer) TS networks, then
generalize the definition to deep TS networks, and finally describe their qualitative properties. For
simplicity, we only show fully-connected architectures using the ReLU nonlinearity. However, other
popular nonlinearities, e.g. max pooling and maxout [16], in addition to ReLU, are also supported in
both fully-connected and convolutional architectures.
2.1 Shallow TS Networks
The TS-ReLU network is a generalization of standard ReLU networks that permits each hidden unit to convey an entire tensor of activity (see Fig. 1). To describe it, we build up from the standard ReLU network. Consider a ReLU layer with weight matrix $W_1 \in \mathbb{R}^{n_1 \times n_0}$ responding to an input vector $X_0 \in \mathbb{R}^{n_0}$. The resulting hidden activity $X_1 \in \mathbb{R}^{n_1}$ of this layer is $X_1 = \max(0_{n_1}, W_1 X_0) = H(W_1 X_0) \odot (W_1 X_0)$, where $H$ is the Heaviside step function and $\odot$ denotes the elementwise product. The rightmost equation splits apart each hidden unit's decision to activate, represented by the term $H(W_1 X_0)$, from the information (i.e. result of analysis) it conveys when active, denoted by $W_1 X_0$.
?
?
X1 = ?H (W1 X0 ) ? X0 W1 ? ? 1n0 ,
|
{z
}
(1)
Z1
where we have made use of the following tensor operations: vector-tensor cross product C = A ?
B =? ci,j,k,... = ai bj,k,... , tensor-matrix Hadamard product C
Pn= A B =? c...,j,i = a...,j,i bj,i
and tensor summative reduction C = A ? 1n =? c...,k,j = i=1 a...,k,j,i . In (1), the input vector
X0 is first expanded into a new matrix representation Z1 ? Rn1 ?n0 with one row per hidden unit. If
a hidden unit is active, the input vector X0 is copied to the corresponding row. Otherwise, the row is
filled with zeros. Finally, this expanded representation Z1 is collapsed back by projection onto W1 .
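A few lines of NumPy make the expansion-then-contraction identity (1) concrete; a minimal sketch with toy dimensions chosen arbitrarily:

```python
import numpy as np

rng = np.random.default_rng(0)
n0, n1 = 5, 3
x0 = rng.normal(size=n0)
W1 = rng.normal(size=(n1, n0))

h = (W1 @ x0 > 0).astype(float)       # H(W1 x0): switching decisions
Z1 = np.outer(h, x0)                  # expanded TS representation (n1 x n0)
x1 = (Z1 * W1).sum(axis=1)            # contraction in Eq. (1)

assert np.allclose(x1, np.maximum(0.0, W1 @ x0))  # recovers SS-ReLU
```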
The central idea behind the TS-ReLU network is to learn a linear classifier directly from the rich, expanded representation $Z_1$, rather than collapsing it back to the lower dimensional $X_1$. That is, in a standard ReLU network, the hidden layer activity $X_1$ is sent through a linear classifier $f_X(W_X X_1)$ trained to minimize some loss function $L_X(f_X)$. In the TS-ReLU network, by contrast, the expanded representation $Z_1$ is sent to a linear classifier $f_Z(W_Z\,\mathrm{vec}(Z_1))$ with loss function $L_Z(f_Z)$. Each TS-ReLU neuron thus transmits a vector of activities (a row of $Z_1$), compared to a standard ReLU neuron that transmits a single scalar (see Fig. 1). Because of this difference, in the following we call the standard ReLU network a Scalar Switching ReLU (SS-ReLU) network.
2.2 Deep TS Networks
The construction given above generalizes readily to deeper networks. Define a nonlinear expansion operation $X \boxtimes W = H(WX) \otimes X$ and a linear contraction operation $Z \boxdot W = (Z \odot W) \circledast 1_n$, such that (1) becomes $X_l = ((X_{l-1} \boxtimes W_l) \odot W_l) \circledast 1_{n_{l-1}} = X_{l-1} \boxtimes W_l \boxdot W_l$ for a given layer $l$ with $X_l \in \mathbb{R}^{n_l}$ and $W_l \in \mathbb{R}^{n_l \times n_{l-1}}$. A deep SS-ReLU network with $L$ layers may then be expressed as a sequence of alternating expansion and contraction steps,

$$X_L = X_0 \boxtimes W_1 \boxdot W_1 \cdots \boxtimes W_L \boxdot W_L. \qquad (2)$$

To obtain the deep TS-ReLU network, we further define the ternary expansion operation $Z \boxtimes_X W = H(WX) \otimes Z$, such that the decision to activate is based on the SS-ReLU variables $X$, but the entire tensor $Z$ is transmitted when the associated hidden unit is active. Let $Z_0 = X_0$. The $l$-th layer activity tensor of a TS network can then be written as $Z_l = H(W_l X_{l-1}) \otimes Z_{l-1} = Z_{l-1} \boxtimes_{X_{l-1}} W_l \in \mathbb{R}^{n_l \times n_{l-1} \times \cdots \times n_0}$. Thus compared to a deep SS-ReLU network, a deep TS-ReLU network simply omits the contraction stages,

$$Z_L = Z_0 \boxtimes_{X_0} W_1 \cdots \boxtimes_{X_{L-1}} W_L. \qquad (3)$$

Because there are no contraction steps, the order of $Z_l \in \mathbb{R}^{n_l \times n_{l-1} \times \cdots \times n_0}$ grows with depth, adding an additional dimension for each layer. One interpretation of this scheme is that, if a hidden unit at layer $l$ is active, the entire tensor $Z_{l-1}$ is copied to the appropriate position in $Z_l$.¹ Otherwise a tensor of zeros is copied. Another equivalent interpretation is that the input vector $X_0$ is copied to a given position $Z_l(i, j, \ldots, k, :)$ only if hidden units $i, j, \ldots, k$ at layers $l, l-1, \ldots, 1$ respectively are all active. Otherwise, $Z_l(i, j, \ldots, k, :) = 0_{n_0}$. Hence activity propagation in the deep TS-ReLU network preserves the layered structure of a deep SS-ReLU network, in which a chain of hidden units across layers must activate for activity to propagate from input to output.
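The deep construction (3) is equally direct: the switching decisions follow the SS pathway while the TS tensor only Kronecker-expands; a minimal sketch that builds $\mathrm{vec}(Z_L)$:

```python
import numpy as np

def deep_ts_expand(x0, Ws):
    """Return vec(Z_L) for a deep TS-ReLU network, Eq. (3).
    Each layer copies the current tensor into the slots of its active
    hidden units (np.kron) while the SS pathway drives the switching."""
    x, z = x0, x0.copy()
    for W in Ws:
        h = (W @ x > 0).astype(float)   # H(W_l x_{l-1}): switching decisions
        z = np.kron(h, z)               # Z_l = H(...) cross-product Z_{l-1}
        x = np.maximum(0.0, W @ x)      # SS-ReLU variables for the next layer
    return z
```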
2.3 Properties
The TS network decouples a hidden unit's decision to activate (as encoded by the activation weights $\{W_l\}$) from the analysis performed on the input when the unit is active (as encoded by the analysis weights $W_Z$). This distinguishing feature leads to the following 3 properties.

Cross-layer analysis. Since the TS representation preserves the layered structure of a deep network and offers direct access to the entire input (parcellated by the activated hidden units), a simple linear readout can effectively reach back across layers to the input and thus implicitly learns analysis weights for all layers at one time in $W_Z$. Therefore it avoids the vanishing gradient problem by construction.²

Error-correcting analysis. As activation and analysis are severed, a careful selection of the analysis weights can "clean up" a certain amount of inexactitude in the choice to activate, e.g. from noisy or even random activation weights. For the SS network, by contrast, bad activation also implies bad analysis.

Fine-grained analysis. To see this, we consider single-hidden-layer TS and SS networks with just one hidden unit. The TS unit, when active, conveys the entire input vector, and hence any full-rank linear map from input to output may be implemented. The SS unit, when active, conveys just a single scalar, and hence can only implement a rank-1 linear map between input and output. By choosing the right analysis weights, a TS network can always implement an SS network,³ but not vice versa. As such, it clearly has greater modeling capacity for a fixed number of hidden units.
Although the TS representation is highly expressive, it comes at the cost of an exponential increase in the size of its representation with depth, i.e. $\prod_l n_l$. This renders TS networks of substantial width and depth very challenging (except as kernels). But as we will show, the expressiveness permits TS networks to perform fairly well without having to be extremely wide and deep, and often noticeably better than SS networks of the same sizes. Also, TS networks of useful sizes can still be implemented with reasonable computing resources, especially when combined with the techniques in Sec. 4.3.
3 Equivalent Kernels
In this section we derive equivalent kernels for TS-ReLU networks with arbitrary depth and an infinite
number of hidden units at each layer, with the aim of providing theoretical insight into how TS-ReLU
is analytically different from SS-ReLU. These kernels represent the extreme of infinite (but unlearned)
features, and might be used in SVM on datasets of small to medium sizes.
¹ For convolutional networks using max pooling, the convolutional-window-sized input patch winning the max pooling is copied. In other words, different nonlinearities only change the way the input is switched.
² It is in spirit similar to models with skip connections to the output [17, 18], although not exactly reducible.
³ Therefore TS networks are also universal function approximators [19].
Figure 2: Equivalent kernels as a function of the angle $\theta$ between unit-length vectors $x$ and $y$ (curves: Linear, SS $L{=}1,2,3$, TS $L{=}1,2,3$). The deep SS-ReLU kernel converges to 1 everywhere as $L \to \infty$, while the deep TS-ReLU kernel converges to 1 at the origin and 0 everywhere else.
Consider a single-hidden-layer TS-ReLU network with $n_1$ hidden units in which each element of the activation weight matrix $W_1 \in \mathbb{R}^{n_1 \times n_0}$ is i.i.d. zero mean Gaussian with arbitrary standard deviation $\sigma$. The infinite-width random TS-ReLU kernel between two vectors $x, y \in \mathbb{R}^{n_0}$ is the dot product between their expanded representations (scaled by $\sqrt{2/n_1}$ for convenience) in the limit of infinite hidden units, $k_1^{TS}(x, y) = \lim_{n_1 \to \infty} \mathrm{vec}\big(\sqrt{2/n_1}\, x \boxtimes W_1\big)^\top \mathrm{vec}\big(\sqrt{2/n_1}\, y \boxtimes W_1\big) = 2\,\mathbb{E}[H(w^\top x)\, H(w^\top y)]\, x^\top y$, where $w \sim \mathcal{N}(0, \sigma^2 I)$ is an $n_0$-dimensional random Gaussian vector. The expectation is the probability that a randomly chosen vector $w$ lies within 90 degrees of both $x$ and $y$. Because $w$ is drawn from an isotropic Gaussian, if $x$ and $y$ differ by an angle $\theta$, then only the fraction $\frac{\pi - \theta}{2\pi}$ of randomly drawn $w$ will be within 90 degrees of both, yielding the equivalent kernel of a single-hidden-layer infinite-width random TS-ReLU network given in (5).⁴

$$k_1^{SS}(x, y) = \hat{k}^{SS}(\theta)\, x^\top y = \Big(1 - \frac{\theta - \tan\theta}{\pi}\Big)\, x^\top y \qquad (4)$$

$$k_1^{TS}(x, y) = \hat{k}^{TS}(\theta)\, x^\top y = \Big(1 - \frac{\theta}{\pi}\Big)\, x^\top y \qquad (5)$$
Figure 2 compares (5) against the linear kernel and the single-hidden-layer infinite-width random SS-ReLU kernel (4) from [20] (see Linear, TS $L = 1$ and SS $L = 1$). It has two important qualitative features. First, it has a discontinuous derivative at $\theta = 0$, and hence a much sharper peak than the other kernels.⁵ Intuitively this means that a very close match counts for much more than a moderately close match. Second, unlike the SS-ReLU kernel which is non-negative everywhere, the TS-ReLU kernel still has a negative lobe, though it is substantially reduced relative to the linear kernel. Intuitively this means that being dissimilar to a support vector can provide evidence against a particular classification, but this negative evidence is much weaker than in a standard linear kernel.
To derive kernels for deeper TS-ReLU networks, we need to consider the deeper SS-ReLU kernels as well, since activation and analysis are severed, and the activation instead depends on its SS-ReLU counterpart. Based upon the recursive formulation from [20], first we define the zeroth-layer kernel $k_0^{*}(x, y) = x^\top y$ and the generalized angle $\theta_l^{*} = \cos^{-1}\big(k_l^{*}(x, y)/\sqrt{k_l^{*}(x, x)\, k_l^{*}(y, y)}\big)$, where $*$ denotes SS or TS. Then we can easily get $k_{l+1}^{SS}(x, y) = \hat{k}^{SS}(\theta_l^{SS})\, k_l^{SS}(x, y)$,⁶ and $k_{l+1}^{TS}(x, y) = \hat{k}^{TS}(\theta_l^{SS})\, k_l^{TS}(x, y)$, where $\hat{k}^{*}$ follows (4) or (5) accordingly.
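The recursion takes only a few lines to evaluate; a minimal sketch that, following footnote 6, uses the numerically stable arc-cosine form for the SS factor and drives the TS factor with the SS angle:

```python
import numpy as np

def deep_relu_kernels(x, y, L):
    """Depth-L infinite-width random kernels (k_L^TS, k_L^SS).
    Norms are preserved by the recursion (the angular factors equal 1
    at theta = 0), so k_l(x, x) = ||x||^2 for all l."""
    nxny = np.linalg.norm(x) * np.linalg.norm(y)
    k_ss = k_ts = float(x @ y)
    for _ in range(L):
        c = np.clip(k_ss / nxny, -1.0, 1.0)
        theta = np.arccos(c)                 # SS generalized angle
        k_ts *= (1.0 - theta / np.pi)        # TS factor, Eq. (5)
        # stable arc-cosine form, equal to khat_SS(theta) * k_ss
        k_ss = nxny / np.pi * (np.sin(theta) + (np.pi - theta) * np.cos(theta))
    return k_ts, k_ss
```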
Figure 2 also plots the deep TS-ReLU and SS-ReLU kernels as a function of depth. The shape of
these kernels reveals sharply divergent behavior between the TS and SS networks. As depth increases,
the equivalent kernel of the TS network falls off ever more rapidly as the angle between input vectors
increases. This means that vectors must be an ever closer match to retain a high kernel value. As
argued earlier, this highlights the ability of the TS network to pick up on and amplify small differences
between inputs, resulting in a quasi-nearest-neighbor behavior. In contrast, the equivalent kernel of
the SS network limits to one as depth increases. Thus, rather than amplifying small differences, it
collapses them with depth such that even very dissimilar vectors receive high kernel values.
⁴ This proof is succinct using a geometric view, while a longer proof can be found in the Supplementary Material. As the kernel is directly defined as a dot product between feature vectors, it is naturally a valid kernel.
⁵ Interestingly, a similar kernel is also observed by [21] for models with explicit skip connections.
⁶ We write (4) and $k_l^{SS}$ differently from [20] for cleaner comparison against the TS-ReLU kernels. However they are numerically unstable expressions and are not used in our experiments to replace the original ones in [20].
Figure 3: Inverted backpropagation learning flowchart (SS pathway: $X_0 \boxtimes W_1 \boxdot W_1 \cdots \boxtimes W_L \boxdot W_L \to X_L$; TS pathway: $Z_0 \boxtimes W_1 \cdots \boxtimes W_L \to Z_L = A_0$; auxiliary pathway: $A_0 \boxdot W_1 \cdots \boxdot W_L \to A_L$, with loss $L_Z$ read from $Z_L$). Solid arrows denote signal flow, dashed arrows denote pseudo gradient flow, and $=$ denotes equivalence. (Top row) The SS pathway. (Bottom row) The TS and auxiliary pathways, where the $Z_l$'s are related by nonlinear expansions, and the $A_l$'s are related by linear contractions. The resulting $A_L$ is equivalent to the alternating expansion and contraction in the SS pathway that yields $X_L$.
4 Learning Algorithms
In the following we present 3 learning algorithms suitable for different scenarios. One-pass ridge regression in Sec. 4.1 learns only the linear readout (i.e. the analysis weights $W_Z$), leaving the hidden-layer representations (i.e. the activation weights $\{W_l\}$) random; hence it is convex and exactly solvable. Inverted backpropagation in Sec. 4.2 learns both analysis and activation weights. Linear Rotation-Compression in Sec. 4.3 also learns both weights, but learns the activation weights in an indirect way.
4.1 Linear Readout Learning via One-pass Ridge Regression
In this scheme, we leverage the intuition that precision in the decision for a hidden unit to activate is
less important than carefully tuned analysis weights, which can in part compensate for poorly tuned
activation weights. We randomly draw and fix the activation weights {Wl }, and then solve for the
analysis weights WZ using ridge regression, which can be done in a single pass through the dataset.
First, each data point $p = 1, \ldots, P$ is expanded into its tensor representation $Z_L^p$ and then accumulated into the correlation matrices $C_{ZZ} = \sum_p \mathrm{vec}(Z_L^p)\, \mathrm{vec}(Z_L^p)^\top$ and $C_{yZ} = \sum_p y^p\, \mathrm{vec}(Z_L^p)^\top$. After all data points are processed once, the analysis weights are determined as $W_Z = C_{yZ} (C_{ZZ} + \lambda I)^{-1}$, where $\lambda$ is an L2 regularization parameter.
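A small sketch of this accumulation, assuming the data stream yields flattened tensor representations paired with target vectors (e.g. one-hot labels); this mirrors the two correlation matrices above but is our own illustration, not the released code.

```python
import numpy as np

def one_pass_ridge(data_stream, dim_z, dim_y, lam):
    """One-pass ridge regression for the analysis weights W_Z.

    data_stream yields (z, y) pairs, with z = vec(Z_L^p) the flattened
    tensor representation of data point p and y its target vector.
    """
    C_zz = np.zeros((dim_z, dim_z))
    C_yz = np.zeros((dim_y, dim_z))
    for z, y in data_stream:
        C_zz += np.outer(z, z)      # accumulates sum_p vec(Z) vec(Z)^T
        C_yz += np.outer(y, z)      # accumulates sum_p y vec(Z)^T
    # W_Z = C_yZ (C_ZZ + lam I)^{-1}; solve a linear system instead of inverting
    W_z = np.linalg.solve(C_zz + lam * np.eye(dim_z), C_yz.T).T
    return W_z
```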
Unlike a standard SS network, which in this setting would only be able to select a linear readout from the top hidden layer to the final classification decision, the TS network offers direct access to entire input vectors, parcellated by the hidden units they activate. In this way, even a linear readout can effectively reach back across layers to the input, implementing a complex function not representable with an SS network with random filters. However, this scheme requires high memory usage, on the order of $O\big((\prod_{l=0}^{L} n_l)^2\big)$ for storing $C_{ZZ}$, and even higher computation cost for solving $W_Z$,⁷ which makes deep architectures (i.e. $L > 1$) impractical. Therefore, this scheme may best suit online learning applications which allow only one-time access to data, but do not require a deep classifier.
4.2 Representation Learning via Inverted Backpropagation
The ridge regression learning uses random activation weights and only learns analysis weights. Here we provide a "gradient-based" procedure to learn both weights. Learning the analysis weights (i.e. the final linear layer) $W_Z$ simply requires $\partial L_Z / \partial W_Z$, which is generally easy to compute. However, since the activation weights $W_l$ in the TS network only appear inside the Heaviside step function $H$ with zero (or undefined) derivative, the gradient $\partial L_Z / \partial W_l$ is also zero. To bypass this, we introduce a sequence of auxiliary variables $A_l$ defined by $A_0 = Z_L$ and the recursion $A_l = A_{l-1} \boxdot W_l \in \mathbb{R}^{n_L \times n_{L-1} \times \cdots \times n_l}$, where $\boxdot$ denotes the linear contraction against $W_l$.
We then derive the pseudo gradient using the proposed inverted backpropagation as
$$\frac{\tilde{\partial} L_Z}{\partial W_l} = \frac{\partial L_Z}{\partial A_0} \left(\frac{\partial A_1}{\partial A_0}\right)^{\!\dagger} \cdots \left(\frac{\partial A_l}{\partial A_{l-1}}\right)^{\!\dagger} \frac{\partial A_l}{\partial W_l}, \qquad (6)$$
where $\dagger$ denotes the Moore–Penrose pseudoinverse. Because the $A_l$'s are related via the linear contraction operator, these derivatives are non-zero and easy to compute. We find this works sufficiently well as a non-zero proxy for $\partial L_Z / \partial W_l$.
⁷ Nonetheless this is a one-time cost and still can be advantageous over other slowly converging algorithms.
Our motivation with this scheme is to "recover" the learning behavior in SS networks. To see this, first note that $A_L = A_0 \boxdot W_1 \boxdot \cdots \boxdot W_L = X_L$ (see Fig. 3). This reflects the fact that the TS and SS networks are linear once the active set of hidden units is known, such that the order of expansion and contraction steps has no effect on the final output. Hence the linear contraction steps, which alternate with expansion steps in (3), can instead be gathered at the end after all expansion steps. The gradient in the SS network is then
$$\frac{\partial L_X}{\partial W_l} = \frac{\partial L_X}{\partial A_L} \frac{\partial A_L}{\partial A_{L-1}} \cdots \frac{\partial A_{l+1}}{\partial A_l} \frac{\partial A_l}{\partial W_l} = \underbrace{\frac{\partial L_X}{\partial A_L} \frac{\partial A_L}{\partial A_{L-1}} \cdots \frac{\partial A_1}{\partial A_0}}_{\partial L_X / \partial A_0} \left(\frac{\partial A_1}{\partial A_0}\right)^{\!\dagger} \cdots \left(\frac{\partial A_l}{\partial A_{l-1}}\right)^{\!\dagger} \frac{\partial A_l}{\partial W_l}. \qquad (7)$$
Replacing $\partial L_X / \partial A_0$ in (7) with $\partial L_Z / \partial A_0$, such that the expanded representation may influence the inverted gradient, we recover (6). Compared to one-pass ridge regression, this scheme controls the memory and time complexities at $O(\prod_l n_l)$, which makes training of a moderately-sized TS network on modern computing resources feasible. The ability to train activation weights also relaxes the assumption that analysis weights can "clean up" inexact activations caused by using even random weights.
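For intuition, here is a simplified sketch of (6) in which the linear contraction is replaced by an ordinary matrix product $A_l = A_{l-1} W_l$; in the real network the contraction acts on tensors, but the pseudoinverse chain is the same. This is our own simplification, not the paper's implementation.

```python
import numpy as np

def inverted_backprop_grads(dL_dA0, A, W):
    """Pseudo-gradients for the activation weights via eq. (6), simplified.

    A: list [A_0, ..., A_L] with A_l = A_{l-1} @ W_l (matrix stand-in for
    the tensor contraction); W: list [W_1, ..., W_L]; dL_dA0: dL_Z/dA_0.
    """
    grads = []
    G = dL_dA0                                  # surrogate gradient at A_0
    for l, Wl in enumerate(W):
        # push the gradient forward through (dA_{l+1}/dA_l)^dagger:
        # for A_{l+1} = A_l W_{l+1}, this multiplies by pinv(W_{l+1})^T
        G = G @ np.linalg.pinv(Wl).T
        # dA_l/dW_l for A_l = A_{l-1} W_l gives the usual outer-product rule
        grads.append(A[l].T @ G)
    return grads
```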
4.3 Indirect Representation Learning via Linear Rotation-Compression
Although the inverted backpropagation learning controls memory and time complexities better than the one-pass ridge regression, the exponential growth of a TS network's representation still severely constrains its potential toward being applied in recent deep learning architectures, where network width and depth can easily go beyond, e.g., a thousand. In addition, the success of recent deep learning architectures also heavily depends on the acceleration provided by highly-optimized GPU-enabled libraries, where the operations of the previous learning schemes are mostly unsupported.
To address these two concerns, we provide a standard backpropagation-compatible learning algorithm, where we no longer keep separate $X$ and $Z$ variables. Instead we define $X_l = W_l'\, \mathrm{vec}(X_{l-1} \circledast W_l)$, with $\circledast$ the TS expansion, which directly flattens the expanded representation and linearly projects it against $W_l' \in \mathbb{R}^{n_l' \times n_l n_{l-1}}$. In this scheme, even though $W_l$ still lacks a non-zero gradient, the $W_{l-1}'$ of the previous layer can be learned using backpropagation to properly "rotate" $X_{l-1}$, such that it can be utilized by $W_l$ and the TS nonlinearity. Therefore, the representation learning here becomes indirect. To simultaneously control the representation size, one can easily let $n_l' < n_l n_{l-1}$ such that $W_l'$ becomes "compressive." Interestingly, we find $n_l' = n_l$ often works surprisingly well, which suggests linearly compressing the expanded TS representation back to the size of an SS representation can still retain its advantage, and thus is used as the default. This scheme can also be combined with inverted backpropagation if learning $W_l$ is still desired.
To understand why linear compression does not remove the TS representation power, we note that it is not equivalent to the linear contraction operation $\boxdot$, where each tensor-valued unit is down-projected independently: linear compression introduces extra interaction between tensor-valued units. Another way to view the linear compression's role is through kernel analysis as shown in Sec. 3: adding a linear layer does not change the shape of a given TS kernel.
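A minimal sketch of one such layer, assuming the TS-ReLU expansion copies the full input vector into each active hidden unit (Heaviside gating, consistent with the "parcellated" input access described in Sec. 4.1); names and shapes are ours, not the released code.

```python
import numpy as np

def ts_rotation_compression_layer(x_prev, W, W_rc):
    """Forward pass of one TS layer with linear rotation-compression.

    x_prev : activations from the previous layer, shape (n_prev,)
    W      : activation weights, shape (n, n_prev)
    W_rc   : rotation-compression weights W', shape (n_out, n * n_prev)
    """
    gates = (W @ x_prev > 0).astype(x_prev.dtype)   # H(W x), shape (n,)
    expanded = np.outer(gates, x_prev)              # copy input into active units
    return W_rc @ expanded.ravel()                  # flatten, then project
```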
5 Experimental Results
Our experiments focus on comparing TS and SS networks with the goal of determining how the TS nonlinearities differ from their SS counterparts. SVMs using SS-ReLU and TS-ReLU kernels are implemented in Matlab based on libsvm-compact [22]. TS networks and all three learning algorithms in Sec. 4 are implemented in Python based on NumPy's ndarray data structure. Both implementations utilize multicore CPU acceleration. In addition, TS networks with only the linear rotation-compression learning are also implemented in Keras, which enjoys much faster GPU acceleration.
We adopt three datasets, viz. MNIST, CIFAR10 and SVHN2, where we reserve the last 5,000 training images for validation. We also include SVHN2's extra training set (except for SVMs⁸) in the training process, and zero-pad MNIST images such that all datasets have the same spatial resolution of 32 × 32.
⁸ Due to the prohibitive kernel matrix size, as SVMs here can only be solved in the dual form.
Table 1: Error rate (%) and run time (×) comparison. Rows compare SS SVM vs. TS SVM, SS MLP vs. TS MLP RR / TS MLP LRC / TS MLP IBP-LRC, and SS CNN vs. TS CNN LRC; columns report one-pass and asymptotic error rates (with the best-performing depths as superscripts) on MNIST, CIFAR10 and SVHN2, plus one-pass run-time factors. The TS variants attain lower error rates in nearly all settings. RR = One-Pass Ridge Regression, LRC = Linear Rotation-Compression, IBP = Inverted Backpropagation.
Figure 4: Comparison of SS CNN and TS CNN LRC models on MNIST, CIFAR10 and SVHN2. (Left) Each dot's coordinate indicates the differences of one-pass and asymptotic error rates between one pair of SS CNN and TS CNN LRC models sharing the same hyperparameters; the first quadrant shows where the TS CNN LRC is better in both errors. (Right) Validation error rates vs. training time (in seconds) on CIFAR10 from the shallower (L = 3+1), intermediate (L = 6+2) and deeper (L = 9+3) models.
For SVMs, we grid search for both kernels with depth from 1 to 10, C from 1 to 1,000, and PCA dimension reduction of the images to 32, 64, 128, 256, or no reduction. For SS and TS networks with fully-connected (i.e. MLP) architectures, we grid search for depth from 1 to 3 and width (including PCA of the input) from 32 to 256 based on our Python implementation. For SS and TS networks with convolutional (i.e. CNN) architectures, we adopt VGG-style [15] convolutional layers with 3 standard SS convolution-max pooling blocks,⁹ where each block can have up to three 3 × 3 convolutions, plus 1 to 3 fully-connected SS or TS layers of fixed width 256. CNN experiments are based on our Keras implementation. For all MLPs and CNNs, we universally use SGD with learning rate $10^{-3}$, momentum 0.9, L2 weight decay $10^{-3}$ and batch size 128 to reduce the grid search complexity by focusing on architectural hyperparameters. All networks are trained for 100 epochs on MNIST and CIFAR10, and 20 epochs on SVHN2, without data augmentation. The source code and scripts for reproducing our experiments are available at https://github.com/coxlab/tsnet.
Table 1 summarizes our experimental results, including both one-pass (i.e. first-epoch) and asymptotic (i.e. all-epoch) error rates and the corresponding depths (for CNNs, convolutional and fully-connected layers are listed separately). The TS nonlinearities perform better in almost all categories, confirming our theoretical insights in Sec. 2.3: the cross-layer analysis (as evidenced by their low error rates after only one epoch of training), the error-correcting analysis (on MNIST and CIFAR10, for instance, the one-pass error rates of TS MLP RR using fixed random activation are close to the asymptotic error rates of TS MLP LRC and IBP-LRC with trained activation), and the fine-grained analysis (the TS networks in general achieve better asymptotic error rates than their SS counterparts).
⁹ This decision is mainly to accelerate the experimental process, since TS convolution runs much slower, but we also observe that TS nonlinearities in lower layers are not always helpful. See later for more discussion.
Figure 5: Visualization of filters learned with backpropagation (SS MLP) and inverted backpropagation (TS MLP IBP) on (Top) MNIST, (Middle) CIFAR10 and (Bottom) SVHN2.
To further demonstrate how using TS nonlinearities affects the distribution of performance across different architectures (here, mainly depth), we plot the performance gains (viz. one-pass and asymptotic error rates) introduced by using the TS nonlinearities on all CNN variants in Fig. 4. The fact that most dots are in the first quadrant (and none in the third quadrant) suggests the TS nonlinearities are predominantly beneficial. Also, to ease the concern that the TS networks' higher complexity may simply consume their advantage on actual run time, we also provide examples of learning progress (i.e. validation error rate) over run time in Fig. 4. The results suggest that even our unoptimized TS network implementation can still provide sizable gains in learning speed.
Finally, to verify the effectiveness of inverted backpropagation in learning useful activation filters
even without the actual gradient, we train single-hidden-layer SS and TS MLPs with 16 hidden units
each (without using PCA dimension reduction of the input) and visualize the learned filters in Fig. 5.
The results suggest inverted backpropagation functions equally well.
6 Discussion
Why do TS networks learn quickly? In general, the TS network sidesteps the vanishing gradient problem as it skips the long chain of linear contractions against the analysis weights (i.e. the auxiliary pathway in Fig. 3). Its linear readout has direct access to the full input vector, which is switched to different parts of the highly expressive expanded representation. This directly accelerates learning. Also, a well-flowing gradient confers benefits beyond the TS layers: e.g. SS layers placed before TS layers also learn faster, since the TS layers "self-organize" rapidly, permitting useful error signals to flow to the lower layers faster.¹⁰ Lastly, when using the inverted backpropagation or linear rotation-compression learning, although $\{W_l\}$ or $\{W_l'\}$ do not learn as fast as $W_Z$, and may still be quite random in the first few epochs, the error-correcting nature of $W_Z$ can still compensate for the learning progress.
Challenges toward deeper TS networks. As shown in Fig. 2, the equivalent kernels of deeper TS networks can be extremely sharp and discriminative, which unavoidably hurts invariant recognition of dissimilar examples. This may explain why we find having TS nonlinearities in only higher (instead of all) layers works better, since the lower SS layers can form invariant representations for the higher TS layers to classify. To remedy this, we may need to consider other types of regularization for $W_Z$ (instead of L2) or other smoothing techniques [25, 26].
Future work. Our main future direction is to improve the TS network's scalability, which may require more parallelism (e.g. multi-GPU processing) and more customization (e.g. GPU kernels utilizing the sparsity of TS representations), with preferably more memory storage/bandwidth (e.g. GPUs using 3D-stacked memory). With improved scalability, we also plan to further verify the TS nonlinearity's efficiency in state-of-the-art architectures [27, 9, 18], which are still computationally prohibitive with our current implementation.
Acknowledgments
We would like to thank James Fitzgerald, Mien "Brabeeba" Wang, Scott Linderman, and Yu Hu for
fruitful discussions. We also thank the anonymous reviewers for their valuable comments. This work
was supported by NSF (IIS 1409097), IARPA (contract D16PC00002), and the Swartz Foundation.
¹⁰ This is a crucial aspect of gradient descent dynamics in layered structures, which behave like a chain: the weakest link must change first [23, 24].
References
[1] Y. LeCun, Y. Bengio, and G. Hinton, "Deep learning," Nature, 2015.
[2] J. Schmidhuber, "Deep learning in neural networks: An overview," Neural Networks, 2015.
[3] R. Hahnloser, R. Sarpeshkar, M. Mahowald, R. Douglas, and S. Seung, "Digital selection and analogue amplification coexist in a cortex-inspired silicon circuit," Nature, 2000.
[4] V. Nair and G. Hinton, "Rectified Linear Units Improve Restricted Boltzmann Machines," in ICML, 2010.
[5] S. Hochreiter, Y. Bengio, P. Frasconi, and J. Schmidhuber, "Gradient Flow in Recurrent Nets: the Difficulty of Learning Long-Term Dependencies," in A Field Guide to Dynamical Recurrent Networks, 2001.
[6] A. Courville, J. Bergstra, and Y. Bengio, "A Spike and Slab Restricted Boltzmann Machine," in AISTATS, 2011.
[7] K. Konda, R. Memisevic, and D. Krueger, "Zero-bias autoencoders and the benefits of co-adapting features," in ICLR, 2015.
[8] R. Srivastava, K. Greff, and J. Schmidhuber, "Training Very Deep Networks," in NIPS, 2015.
[9] K. He, X. Zhang, S. Ren, and J. Sun, "Identity Mappings in Deep Residual Networks," in ECCV, 2016.
[10] M. Janzamin, H. Sedghi, and A. Anandkumar, "Beating the Perils of Non-Convexity: Guaranteed Training of Neural Networks using Tensor Methods," arXiv, 2015.
[11] L. Deng and D. Yu, "Deep Convex Net: A Scalable Architecture for Speech Pattern Classification," in Interspeech, 2011.
[12] B. Amos and Z. Kolter, "Input-Convex Deep Networks," in ICLR Workshop, 2015.
[13] Ö. Aslan, X. Zhang, and D. Schuurmans, "Convex Deep Learning via Normalized Kernels," in NIPS, 2014.
[14] S. Wang, A. Mohamed, R. Caruana, J. Bilmes, M. Plilipose, M. Richardson, K. Geras, G. Urban, and O. Aslan, "Analysis of Deep Neural Networks with the Extended Data Jacobian Matrix," in ICML, 2016.
[15] K. Simonyan and A. Zisserman, "Very Deep Convolutional Networks for Large-Scale Image Recognition," in ICLR, 2015.
[16] I. Goodfellow, D. Warde-Farley, M. Mirza, A. Courville, and Y. Bengio, "Maxout Networks," in ICML, 2013.
[17] C. Szegedy, W. Liu, Y. Jia, P. Sermanet, S. Reed, D. Anguelov, D. Erhan, V. Vanhoucke, and A. Rabinovich, "Going Deeper with Convolutions," in CVPR, 2015.
[18] G. Huang, Z. Liu, and K. Weinberger, "Densely Connected Convolutional Networks," arXiv, 2016.
[19] S. Sonoda and N. Murata, "Neural network with unbounded activation functions is universal approximator," Applied and Computational Harmonic Analysis, 2015.
[20] Y. Cho and L. Saul, "Large-Margin Classification in Infinite Neural Networks," Neural Computation, 2010.
[21] D. Duvenaud, O. Rippel, R. Adams, and Z. Ghahramani, "Avoiding pathologies in very deep networks," in AISTATS, 2014.
[22] J. Andén and S. Mallat, "Deep Scattering Spectrum," IEEE T-SP, 2014.
[23] A. Saxe, J. McClelland, and S. Ganguli, "Exact solutions to the nonlinear dynamics of learning in deep linear neural networks," in ICLR, 2014.
[24] A. Saxe, "A deep learning theory of perceptual learning dynamics," in COSYNE, 2015.
[25] T. Miyato, S. Maeda, M. Koyama, K. Nakae, and S. Ishii, "Distributional Smoothing with Virtual Adversarial Training," in ICLR, 2016.
[26] Q. Bai, S. Rosenberg, Z. Wu, and S. Sclaroff, "Differential Geometric Regularization for Supervised Learning of Classifiers," in ICML, 2016.
[27] J. Springenberg, A. Dosovitskiy, T. Brox, and M. Riedmiller, "Striving for Simplicity: The All Convolutional Net," in ICLR Workshop, 2015.
on smooth semide?nite programs
Vladislav Voroninski?
Department of Mathematics
Massachusetts Institute of Technology
vvlad@math.mit.edu
Nicolas Boumal?
Department of Mathematics
Princeton University
nboumal@math.princeton.edu
Afonso S. Bandeira
Department of Mathematics and Center for Data Science
Courant Institute of Mathematical Sciences, New York University
bandeira@cims.nyu.edu
Abstract
Semidefinite programs (SDP's) can be solved in polynomial time by interior point methods, but scalability can be an issue. To address this shortcoming, over a decade ago, Burer and Monteiro proposed to solve SDP's with few equality constraints via rank-restricted, non-convex surrogates. Remarkably, for some applications, local optimization methods seem to converge to global optima of these non-convex surrogates reliably. Although some theory supports this empirical success, a complete explanation of it remains an open question. In this paper, we consider a class of SDP's which includes applications such as max-cut, community detection in the stochastic block model, robust PCA, phase retrieval and synchronization of rotations. We show that the low-rank Burer–Monteiro formulation of SDP's in that class almost never has any spurious local optima.
1 Introduction
We consider semidefinite programs (SDP's) of the form
$$f^\star = \min_{X \in \mathbb{S}^{n\times n}} \langle C, X \rangle \quad \text{subject to} \quad \mathcal{A}(X) = b,\ X \succeq 0, \tag{SDP}$$
where $\langle C, X \rangle = \mathrm{Tr}(C^\top X)$, $C \in \mathbb{S}^{n\times n}$ is the symmetric cost matrix, $\mathcal{A} \colon \mathbb{S}^{n\times n} \to \mathbb{R}^m$ is a linear operator capturing $m$ equality constraints with right hand side $b \in \mathbb{R}^m$, and the variable $X$ is symmetric, positive semidefinite. Interior point methods solve (SDP) in polynomial time [Nesterov, 2004]. In practice however, for $n$ beyond a few thousands, such algorithms run out of memory (and time), prompting research for alternative solvers.
If (SDP) has a compact search space, then it admits a global optimum of rank at most $r$, where $r(r+1)/2 \leq m$ [Pataki, 1998, Barvinok, 1995]. Thus, if one restricts the search space of (SDP) to matrices of rank at most $p$ with $p(p+1)/2 \geq m$, then the globally optimal value remains unchanged. This restriction is easily enforced by factorizing $X = YY^\top$, where $Y$ has size $n \times p$, yielding an equivalent quadratically constrained quadratic program:
$$q^\star = \min_{Y \in \mathbb{R}^{n\times p}} \langle CY, Y \rangle \quad \text{subject to} \quad \mathcal{A}(YY^\top) = b. \tag{P}$$
*The first two authors contributed equally.
30th Conference on Neural Information Processing Systems (NIPS 2016), Barcelona, Spain.
In general, (P) is non-convex, making it a priori unclear how to solve it globally. Still, the benefits are that it is lower dimensional than (SDP) and has no conic constraint. This has motivated Burer and Monteiro [2003, 2005] to try and solve (P) using local optimization methods, with surprisingly good results. They developed theory in support of this observation (details below). About their results, Burer and Monteiro [2005, §3] write (mutatis mutandis):
"How large must we take p so that the local minima of (P) are guaranteed to map to global minima of (SDP)? Our theorem asserts that we need only¹ $p(p+1)/2 > m$ (with the important caveat that positive-dimensional faces of (SDP) which are 'flat' with respect to the objective function can harbor non-global local minima)."
The caveat (the existence or non-existence of non-global local optima, or their potentially adverse effect for local optimization algorithms) was not further discussed.
In this paper, assuming $p(p+1)/2 > m$, we show that if the search space of (SDP) is compact and if the search space of (P) is a smooth manifold, then, for almost all cost matrices $C$, if $Y$ satisfies first- and second-order necessary optimality conditions for (P), then $Y$ is a global optimum of (P) and, since $p(p+1)/2 \geq m$, $X = YY^\top$ is a global optimum of (SDP); in other words, first- and second-order necessary optimality conditions for (P) are also sufficient for global optimality, an unusual theoretical guarantee in non-convex optimization.
Notice that this is a statement about the optimization problem itself, not about specific algorithms. Interestingly, known algorithms for optimization on manifolds converge to second-order critical points,² regardless of initialization [Boumal et al., 2016].
For the specified class of SDP's, our result improves on those of [Burer and Monteiro, 2005] in two important ways. Firstly, for almost all $C$, we formally exclude the existence of spurious local optima.³ Secondly, we only require the computation of second-order critical points of (P) rather than local optima (which is hard in general [Vavasis, 1991]). Below, we make a statement about computational complexity, and we illustrate the practical efficiency of the proposed methods through numerical experiments.
SDP's which satisfy the compactness and smoothness assumptions occur in a number of applications including Max-Cut, robust PCA, $\mathbb{Z}_2$-synchronization, community detection, cut-norm approximation, phase synchronization, phase retrieval, synchronization of rotations and the trust-region subproblem; see Section 4 for references.
A simple example: the Max-Cut problem
Given an undirected graph, Max-Cut is the NP-hard problem of clustering the $n$ nodes of this graph in two classes, $+1$ and $-1$, such that as many edges as possible join nodes of different signs. If $C$ is the adjacency matrix of the graph, Max-Cut is expressed as
$$\max_{x \in \mathbb{R}^n} \frac{1}{4} \sum_{i,j=1}^{n} C_{ij}(1 - x_i x_j) \quad \text{s.t.} \quad x_1^2 = \cdots = x_n^2 = 1. \tag{Max-Cut}$$
Introducing the positive semidefinite matrix $X = xx^\top$, both the cost and the constraints may be expressed linearly in terms of $X$. Ignoring that $X$ has rank 1 yields the well-known convex relaxation in the form of a semidefinite program (up to an affine transformation of the cost):
$$\min_{X \in \mathbb{S}^{n\times n}} \langle C, X \rangle \quad \text{s.t.} \quad \mathrm{diag}(X) = \mathbf{1},\ X \succeq 0. \tag{Max-Cut SDP}$$
If a solution $X$ of this SDP has rank 1, then $X = xx^\top$ for some $x$ which is then an optimal cut. In the general case of higher rank $X$, Goemans and Williamson [1995] exhibited the celebrated rounding scheme to produce approximately optimal cuts (within a ratio of .878) from $X$.
¹ The condition on $p$ and $m$ is slightly, but inconsequentially, different in [Burer and Monteiro, 2005].
² Second-order critical points satisfy first- and second-order necessary optimality conditions.
³ Before Prop. 2.3 in [Burer and Monteiro, 2005], the authors write: "The change of variables $X = YY^\top$ does not introduce any extraneous local minima." This is sometimes misunderstood to mean (P) does not have spurious local optima, when it actually means that the local optima of (P) are in exact correspondence with the local optima of "(SDP) with the extra constraint $\mathrm{rank}(X) \leq p$," which is also non-convex and thus also liable to having local optima. Unfortunately, this misinterpretation has led to some confusion in the literature.
The corresponding Burer–Monteiro non-convex problem with rank bounded by $p$ is:
$$\min_{Y \in \mathbb{R}^{n\times p}} \langle CY, Y \rangle \quad \text{s.t.} \quad \mathrm{diag}(YY^\top) = \mathbf{1}. \tag{Max-Cut BM}$$
The constraint $\mathrm{diag}(YY^\top) = \mathbf{1}$ requires each row of $Y$ to have unit norm; that is, $Y$ is a point on the Cartesian product of $n$ unit spheres in $\mathbb{R}^p$, which is a smooth manifold. Furthermore, all $X$ feasible for the SDP have identical trace equal to $n$, so that the search space of the SDP is compact. Thus, our results stated below apply:
For $p = \lceil\sqrt{2n}\,\rceil$, for almost all $C$, even though (Max-Cut BM) is non-convex, any local optimum $Y$ is a global optimum (and so is $X = YY^\top$), and all saddle points have an escape (the Hessian has a negative eigenvalue).
We note that, for $p > n/2$, the same holds for all $C$ [Boumal, 2015].
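To illustrate how simple the resulting geometry is, here is a minimal NumPy sketch of Riemannian gradient descent on (Max-Cut BM). It is only an illustration: the guarantees in this paper are stated for second-order methods such as RTR (Proposition 3 below), not for this first-order loop.

```python
import numpy as np

def maxcut_bm_gd(C, p, steps=1000, lr=1e-3, seed=0):
    """Riemannian gradient descent for (Max-Cut BM): min <CY, Y>, unit rows."""
    rng = np.random.default_rng(seed)
    n = C.shape[0]
    Y = rng.standard_normal((n, p))
    Y /= np.linalg.norm(Y, axis=1, keepdims=True)        # start on the manifold
    for _ in range(steps):
        G = 2 * C @ Y                                    # Euclidean gradient
        # Riemannian gradient: remove, for each row, the component along y_i
        G -= np.sum(G * Y, axis=1, keepdims=True) * Y
        Y = Y - lr * G
        Y /= np.linalg.norm(Y, axis=1, keepdims=True)    # retraction: renormalize
    return Y
```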
Notation
$\mathbb{S}^{n\times n}$ is the set of real, symmetric matrices of size $n$. A symmetric matrix $X$ is positive semidefinite ($X \succeq 0$) if and only if $u^\top X u \geq 0$ for all $u \in \mathbb{R}^n$. For matrices $A, B$, the standard Euclidean inner product is $\langle A, B \rangle = \mathrm{Tr}(A^\top B)$. The associated (Frobenius) norm is $\|A\| = \sqrt{\langle A, A \rangle}$. $\mathrm{Id}$ is the identity operator and $I_n$ is the identity matrix of size $n$.
2 Main results
Our main result establishes conditions under which first- and second-order necessary optimality conditions for (P) are sufficient for global optimality. Under those conditions, it is a fortiori true that global optima of (P) map to global optima of (SDP), so that local optimization methods on (P) can be used to solve the higher-dimensional, cone-constrained (SDP).
We now specify the necessary optimality conditions of (P). Under the assumptions of our main result below (Theorem 2), the search space
$$\mathcal{M} = \mathcal{M}_p = \{Y \in \mathbb{R}^{n\times p} : \mathcal{A}(YY^\top) = b\} \tag{1}$$
is a smooth and compact manifold. As such, it can be linearized at each point $Y \in \mathcal{M}$ by a tangent space, simply by differentiating the constraints [Absil et al., 2008, eq. (3.19)]:
$$\mathrm{T}_Y\mathcal{M} = \{\dot{Y} \in \mathbb{R}^{n\times p} : \mathcal{A}(\dot{Y}Y^\top + Y\dot{Y}^\top) = 0\}. \tag{2}$$
Endowing the tangent spaces of $\mathcal{M}$ with the (restricted) Euclidean metric $\langle A, B \rangle = \mathrm{Tr}(A^\top B)$ turns $\mathcal{M}$ into a Riemannian submanifold of $\mathbb{R}^{n\times p}$. In general, second-order optimality conditions can be intricate to handle [Ruszczyński, 2006]. Fortunately, here, the smoothness of both the search space (1) and the cost function
$$f(Y) = \langle CY, Y \rangle \tag{3}$$
make for straightforward conditions. In spirit, they coincide with the well-known conditions for unconstrained optimization. As further detailed in Appendix A, the Riemannian gradient $\mathrm{grad}f(Y)$ is the orthogonal projection of the classical gradient of $f$ to the tangent space $\mathrm{T}_Y\mathcal{M}$. The Riemannian Hessian of $f$ at $Y$ is a similarly restricted version of the classical Hessian of $f$ to the tangent space.
Definition 1. A (first-order) critical point for (P) is a point $Y \in \mathcal{M}$ such that
$$\mathrm{grad}f(Y) = 0, \qquad \text{(1st-order necessary optimality condition)}$$
where $\mathrm{grad}f(Y) \in \mathrm{T}_Y\mathcal{M}$ is the Riemannian gradient at $Y$ of $f$ restricted to $\mathcal{M}$. A second-order critical point for (P) is a critical point $Y$ such that
$$\mathrm{Hess}f(Y) \succeq 0, \qquad \text{(2nd-order necessary optimality condition)}$$
where $\mathrm{Hess}f(Y) \colon \mathrm{T}_Y\mathcal{M} \to \mathrm{T}_Y\mathcal{M}$ is the Riemannian Hessian at $Y$ of $f$ restricted to $\mathcal{M}$ (a symmetric linear operator).
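To connect Definition 1 to computation: under the constraint qualifications discussed in Section 3, the Riemannian gradient can be obtained by projecting the Euclidean gradient onto $\mathrm{T}_Y\mathcal{M}$. A hedged NumPy sketch, assuming the constraints are given as $\mathcal{A}(YY^\top)_i = \langle A_i, YY^\top \rangle$ with explicit matrices $A_i$:

```python
import numpy as np

def riemannian_grad(C, Y, A_list):
    """Riemannian gradient of f(Y) = <CY, Y>, by projection onto T_Y M.

    Under the CQ's, the matrices A_i Y span the normal space at Y, so
    gradf(Y) is the Euclidean gradient minus its least-squares fit there.
    """
    G = 2 * C @ Y                                       # Euclidean gradient
    N = np.stack([(A @ Y).ravel() for A in A_list])     # m x (n*p) normal basis
    mu, *_ = np.linalg.lstsq(N.T, G.ravel(), rcond=None)
    return G - (N.T @ mu).reshape(Y.shape)
```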
Proposition 1. All local (and global) optima of (P) are second-order critical points.
Proof. See [Yang et al., 2014, Rem. 4.2 and Cor. 4.2].
We can now state our main result. In the theorem statement below, "for almost all $C$" means potentially troublesome cost matrices form at most a (Lebesgue) zero-measure subset of $\mathbb{S}^{n\times n}$, in the same way that almost all square matrices are invertible. In particular, given any matrix $C \in \mathbb{S}^{n\times n}$, perturbing $C$ to $C + \sigma W$ where $W$ is a Wigner random matrix results in an acceptable cost matrix with probability 1, for arbitrarily small $\sigma > 0$.
Theorem 2. Given constraints $\mathcal{A} \colon \mathbb{S}^{n\times n} \to \mathbb{R}^m$, $b \in \mathbb{R}^m$ and $p$ satisfying $p(p+1)/2 > m$, if
(i) the search space of (SDP) is compact; and
(ii) the search space of (P) is a smooth manifold,
then for almost all cost matrices $C \in \mathbb{S}^{n\times n}$, any second-order critical point of (P) is globally optimal. Under these conditions, if $Y$ is globally optimal for (P), then the matrix $X = YY^\top$ is globally optimal for (SDP).
The assumptions are discussed in the next section. The proof (see Appendix A) follows directly from the combination of two intermediate results:
1. If $Y$ is rank deficient and second-order critical for (P), then it is globally optimal and $X = YY^\top$ is optimal for (SDP); and
2. If $p(p+1)/2 > m$, then, for almost all $C$, every first-order critical $Y$ is rank-deficient.
The first step holds in a more general context, as previously established by Burer and Monteiro [2003, 2005]. The second step is new and crucial, as it allows to formally exclude the existence of spurious local optima, generically in $C$, thus resolving the caveat mentioned in the introduction.
The smooth structure of (P) naturally suggests using Riemannian optimization to solve it [Absil et al., 2008], which is something that was already proposed by Journée et al. [2010] in the same context.
Importantly, known algorithms converge to second-order critical points regardless of initialization.
We state here a recent computational result to that effect.
Proposition 3. Under the numbered assumptions of Theorem 2, the Riemannian trust-region method (RTR) [Absil et al., 2007] initialized with any $Y_0 \in \mathcal{M}$ returns in $O(1/(\varepsilon_g^2 \varepsilon_H) + 1/\varepsilon_H^3)$ iterations a point $Y \in \mathcal{M}$ such that
$$f(Y) \leq f(Y_0), \qquad \|\mathrm{grad}f(Y)\| \leq \varepsilon_g, \qquad \text{and} \qquad \mathrm{Hess}f(Y) \succeq -\varepsilon_H\,\mathrm{Id}.$$
Proof. Apply the main results of [Boumal et al., 2016] using that $f$ has locally Lipschitz continuous gradient and Hessian in $\mathbb{R}^{n\times p}$ and $\mathcal{M}$ is a compact submanifold of $\mathbb{R}^{n\times p}$.
Essentially, each iteration of RTR requires evaluation of one cost and one gradient, a bounded number of Hessian-vector applications, and one projection from $\mathbb{R}^{n\times p}$ to $\mathcal{M}$. In many important cases, this projection amounts to Gram–Schmidt orthogonalization of small blocks of $Y$; see Section 4.
Proposition 3 bounds worst-case iteration counts for arbitrary initialization. In practice, a good initialization point may be available, making the local convergence rate of RTR more informative. For RTR, one may expect superlinear or even quadratic local convergence rates near isolated local minimizers [Absil et al., 2007]. While minimizers are not isolated in our case [Journée et al., 2010], experiments show a characteristically superlinear local convergence rate in practice [Boumal, 2015]. This means high accuracy solutions can be achieved, as demonstrated in Appendix B.
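For instance, for the product-of-Stiefel search spaces of Section 4, the block-wise orthogonalization can be sketched as follows (QR on the transposed block plays the role of Gram–Schmidt; the sign fix makes the map continuous):

```python
import numpy as np

def retract_stiefel_blocks(Y, d):
    """Orthonormalize the rows of each d x p slice of Y, block by block."""
    out = Y.copy()
    for i in range(0, Y.shape[0], d):
        Q, R = np.linalg.qr(Y[i:i + d].T)            # Q: p x d, orthonormal columns
        out[i:i + d] = (Q * np.sign(np.diag(R))).T   # rows become orthonormal
    return out
```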
Thus, under the conditions of Theorem 2, generically in $C$, RTR converges to global optima. In practice, the algorithm returns after a finite number of steps, and only approximate second-order criticality is guaranteed. Hence, it is interesting to bound the optimality gap in terms of the approximation quality. Unfortunately, we do not establish such a result for small $p$. Instead, we give an a posteriori computable optimality gap bound which holds for all $p$ and for all $C$. In the following statement, the dependence of $\mathcal{M}$ on $p$ is explicit, as $\mathcal{M}_p$. The proof is in Appendix A.
Theorem 4. Let $R < \infty$ be the maximal trace of any $X$ feasible for (SDP). For any $p$ such that $\mathcal{M}_p$ and $\mathcal{M}_{p+1}$ are smooth manifolds (even if $p(p+1)/2 \leq m$) and for any $Y \in \mathcal{M}_p$, form $\tilde{Y} = [Y\,|\,0_{n\times 1}]$ in $\mathcal{M}_{p+1}$. The optimality gap at $Y$ is bounded as
$$0 \leq 2(f(Y) - f^\star) \leq \sqrt{R}\,\|\mathrm{grad}f(Y)\| - R\,\lambda_{\min}(\mathrm{Hess}f(\tilde{Y})). \tag{4}$$
If all feasible $X$ have the same trace $R$ and there exists a positive definite feasible $X$, then the bound simplifies to
$$0 \leq 2(f(Y) - f^\star) \leq -R\,\lambda_{\min}(\mathrm{Hess}f(\tilde{Y})), \tag{5}$$
so that $\|\mathrm{grad}f(Y)\|$ needs not be controlled explicitly. If $p > n$, the bounds hold with $\tilde{Y} = Y$.
In particular, for $p = n + 1$, the bound can be controlled a priori: approximate second-order critical points are approximately optimal, for any $C$.⁴
Corollary 5. Under the assumptions of Theorem 4, if $p = n + 1$ and $Y \in \mathcal{M}$ satisfies both $\|\mathrm{grad}f(Y)\| \leq \varepsilon_g$ and $\mathrm{Hess}f(Y) \succeq -\varepsilon_H\,\mathrm{Id}$, then $Y$ is approximately optimal in the sense that
$$0 \leq 2(f(Y) - f^\star) \leq \sqrt{R}\,\varepsilon_g + R\,\varepsilon_H.$$
Under the same condition as in Theorem 4, the bound can be simplified to $R\,\varepsilon_H$.
This works well with Proposition 3. For any $p$, equation (4) also implies the following:
$$\lambda_{\min}(\mathrm{Hess}f(\tilde{Y})) \leq -\frac{2(f(Y) - f^\star) - \sqrt{R}\,\|\mathrm{grad}f(Y)\|}{R}.$$
That is, for any $p$ and any $C$, an approximate critical point $Y$ in $\mathcal{M}_p$ which is far from optimal maps to a comfortably-escapable approximate saddle point $\tilde{Y}$ in $\mathcal{M}_{p+1}$.
This suggests an algorithm as follows. For a starting value of $p$ such that $\mathcal{M}_p$ is a manifold, use RTR to compute an approximate second-order critical point $Y$. Then, form $\tilde{Y}$ in $\mathcal{M}_{p+1}$ and test the left-most eigenvalue of $\mathrm{Hess}f(\tilde{Y})$.⁵ If it is close enough to zero, this provides a good bound on the optimality gap. If not, use an (approximate) eigenvector associated to $\lambda_{\min}(\mathrm{Hess}f(\tilde{Y}))$ to escape the approximate saddle point and apply RTR from that new point in $\mathcal{M}_{p+1}$; iterate. In the worst-case scenario, $p$ grows to $n + 1$, at which point all approximate second-order critical points are approximate optima. Theorem 2 suggests $p = \lceil\sqrt{2m}\,\rceil$ should suffice for $C$ bounded away from a zero-measure set. Such an algorithm already features with less theory in [Journée et al., 2010] and [Boumal, 2015]; in the latter, it is called the Riemannian staircase, for it lifts (P) floor by floor.
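A sketch of this staircase loop follows. Both helper callables, and their names, are assumptions of the sketch; in practice they would come from a Riemannian optimization toolbox, and the escape step would be followed by a retraction onto the manifold.

```python
import numpy as np

def riemannian_staircase(solve_rtr, min_eig_escape, p0, n, tol=1e-6):
    """Staircase loop (sketch).

    solve_rtr(p, Y_init): approximate second-order critical point in M_p.
    min_eig_escape(Y_pad): (lambda_min, V) for Hessf at [Y | 0] in M_{p+1},
    with V a corresponding tangent eigenvector.
    """
    p, Y = p0, None
    while p <= n + 1:
        Y = solve_rtr(p, Y)
        Y_pad = np.hstack([Y, np.zeros((Y.shape[0], 1))])   # lift to M_{p+1}
        lam, V = min_eig_escape(Y_pad)
        if lam >= -tol:        # near second-order critical: gap bound (4) applies
            return Y
        Y, p = Y_pad + 1e-2 * V, p + 1   # escape the saddle, go up one floor
    return Y
```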
Related work
Low-rank approaches to solve SDP's have featured in a number of recent research papers. We highlight just two which illustrate different classes of SDP's of interest.
Shah et al. [2016] tackle SDP's with linear cost and linear constraints (both equalities and inequalities) via low-rank factorizations, assuming the matrices appearing in the cost and constraints are positive semidefinite. They propose a non-trivial initial guess to partially overcome non-convexity with great empirical results, but do not provide optimality guarantees.
Bhojanapalli et al. [2016a] on the other hand consider the minimization of a convex cost function over positive semidefinite matrices, without constraints. Such problems could be obtained from generic SDP's by penalizing the constraints in a Lagrangian way. Here too, non-convexity is partially overcome via non-trivial initialization, with global optimality guarantees under some conditions.
Also of interest are recent results about the harmlessness of non-convexity in low-rank matrix completion [Ge et al., 2016, Bhojanapalli et al., 2016b]. Similarly to the present work, the authors there show there is no need for special initialization despite non-convexity.
⁴ With $p = n + 1$, problem (P) is no longer lower dimensional than (SDP), but retains the advantage of not involving a positive semidefiniteness constraint.
⁵ It may be more practical to test $\lambda_{\min}(S)$ (14) rather than $\lambda_{\min}(\mathrm{Hess}f)$. Lemma 7 relates the two. See [Journée et al., 2010, §3.3] to construct escape tangent vectors from $S$.
3 Discussion of the assumptions
Our main result, Theorem 2, comes with geometric assumptions on the search spaces of both (SDP) and (P) which we now discuss. Examples of SDP's which fit the assumptions of Theorem 2 are featured in the next section.
The assumption that the search space of (SDP),
$$\mathcal{C} = \{X \in \mathbb{S}^{n\times n} : \mathcal{A}(X) = b,\ X \succeq 0\}, \tag{6}$$
is compact works in pair with the assumption $p(p+1)/2 > m$ as follows. For (P) to reveal the global optima of (SDP), it is necessary that (SDP) admits a solution of rank at most $p$. One way to ensure this is via the Pataki–Barvinok theorems [Pataki, 1998, Barvinok, 1995], which state that all extreme points of $\mathcal{C}$ have rank $r$ bounded as $r(r+1)/2 \leq m$. Extreme points are faces of dimension zero (such as vertices for a cube). When optimizing a linear cost function $\langle C, X \rangle$ over a compact convex set $\mathcal{C}$, at least one extreme point is a global optimum [Rockafellar, 1970, Cor. 32.3.2]; this is not true in general if $\mathcal{C}$ is not compact. Thus, under the assumptions of Theorem 2, there is a point $Y \in \mathcal{M}$ such that $X = YY^\top$ is an optimal extreme point of (SDP); then, of course, $Y$ itself is optimal for (P).
In general, the Pataki–Barvinok bound is tight, in that there exist extreme points of rank up to that upper bound (rounded down); see for example [Laurent and Poljak, 1996] for the Max-Cut SDP and [Boumal, 2015] for the Orthogonal-Cut SDP. Let $C$ (the cost matrix) be the negative of such an extreme point. Then, the unique optimum of (SDP) is that extreme point, showing that $p(p+1)/2 \geq m$ is necessary for (SDP) and (P) to be equivalent for all $C$. We further require a strict inequality because our proof relies on properties of rank-deficient $Y$'s in $\mathcal{M}$.
The assumption that $\mathcal{M}$ (eq. (1)) is a smooth manifold works in pair with the ambition that the result should hold for (almost) all cost matrices $C$. The starting point is that, for a given non-convex smooth optimization problem (even a quadratically constrained quadratic program), computing local optima is hard in general [Vavasis, 1991]. Thus, we wish to restrict our attention to efficiently computable points, such as points which satisfy first- and second-order KKT conditions for (P); see [Burer and Monteiro, 2003, §2.2] and [Ruszczyński, 2006, §3]. This only makes sense if global optima satisfy the latter, that is, if KKT conditions are necessary for optimality. A global optimum $Y$ necessarily satisfies KKT conditions if constraint qualifications (CQ's) hold at $Y$ [Ruszczyński, 2006]. The standard CQ's for equality constrained programs are Robinson's conditions or metric regularity (they are here equivalent). They read as follows, assuming $\mathcal{A}(YY^\top)_i = \langle A_i, YY^\top \rangle$ for some matrices $A_1, \ldots, A_m \in \mathbb{S}^{n\times n}$:
$$\text{CQ's hold at } Y \text{ if } A_1Y, \ldots, A_mY \text{ are linearly independent in } \mathbb{R}^{n\times p}. \tag{7}$$
Considering almost all $C$, global optima could, a priori, be almost anywhere in $\mathcal{M}$. To simplify, we require CQ's to hold at all $Y$'s in $\mathcal{M}$ rather than only at the (unknown) global optima. This turns out to be a sufficient condition for $\mathcal{M}$ to be a smooth manifold of codimension $m$ [Absil et al., 2008, Prop. 3.3.3]. Indeed, tangent vectors $\dot{Y} \in \mathrm{T}_Y\mathcal{M}$ (2) are exactly those vectors that satisfy $\langle A_iY, \dot{Y} \rangle = 0$: under CQ's, the $A_iY$'s form a basis of the normal space to the manifold at $Y$.
Once it is decided that $\mathcal{M}$ must be a manifold, we can step away from the specific representation of it via the matrices $A_1, \ldots, A_m$ and reason about optimality conditions on the manifold directly. Adding redundant constraints (for example, duplicating $A_1$) would break the CQ's, but not the manifold structure. Hence, stating Theorem 2 in terms of manifolds better captures the role of $\mathcal{M}$ than stating it in terms of CQ's. See also [Andreani et al., 2010, Thm. 3.3] for a proof that requiring $\mathcal{M}$ to be a manifold around $Y$ is a type of CQ.
Finally, we note that Theorem 2 only applies for almost all $C$, rather than all $C$. To justify this restriction, if indeed it is justified, one should exhibit a matrix $C$ that leads to suboptimal second-order critical points while other assumptions are satisfied. We do not have such an example. We do observe that (Max-Cut SDP) on cycles of certain even lengths has a unique solution of rank 1, while the corresponding (Max-Cut BM) with $p = 2$ has suboptimal local optima (strictly, if we quotient out symmetries). This at least suggests it is not enough, for generic $C$, to set $p$ just larger than the rank of the solutions of the SDP. (For those same examples, at $p = 3$, we consistently observe convergence to global optima.)
4 Examples of smooth SDP's
The canonical examples of SDP's which satisfy the assumptions in Theorem 2 are those where the diagonal blocks of $X$ or their traces are fixed. We note that the algorithms and the theory continue to hold for complex matrices, where the set of Hermitian matrices of size $n$ is treated as a real vector space of dimension $n^2$ (instead of $n(n+1)/2$ in the real case) with inner product $\langle H_1, H_2 \rangle = \Re\{\mathrm{Tr}(H_1^* H_2)\}$, so that occurrences of $p(p+1)/2$ are replaced by $p^2$.
Certain concrete examples of SDP's include:
$$\min_X\ \langle C, X \rangle \quad \text{s.t.} \quad \mathrm{Tr}(X) = 1,\ X \succeq 0; \tag{fixed trace}$$
$$\min_X\ \langle C, X \rangle \quad \text{s.t.} \quad \mathrm{diag}(X) = \mathbf{1},\ X \succeq 0; \tag{fixed diagonal}$$
$$\min_X\ \langle C, X \rangle \quad \text{s.t.} \quad X_{ii} = I_d,\ X \succeq 0. \tag{fixed diagonal blocks}$$
Their rank-constrained counterparts read as follows (matrix norms are Frobenius norms):
$$\min_{Y \in \mathbb{R}^{n\times p}}\ \langle CY, Y \rangle \quad \text{s.t.} \quad \|Y\| = 1; \tag{sphere}$$
$$\min_{Y \in \mathbb{R}^{n\times p}}\ \langle CY, Y \rangle \quad \text{s.t.} \quad Y^\top = [y_1 \cdots y_n] \text{ and } \|y_i\| = 1 \text{ for all } i; \tag{product of spheres}$$
$$\min_{Y \in \mathbb{R}^{qd\times p}}\ \langle CY, Y \rangle \quad \text{s.t.} \quad Y^\top = [Y_1 \cdots Y_q] \text{ and } Y_i^\top Y_i = I_d \text{ for all } i. \tag{product of Stiefel}$$
The first example has only one constraint: the SDP always admits an optimal rank 1 solution, corresponding to an eigenvector associated to the left-most eigenvalue of $C$. This generalizes to the trust-region subproblem as well.
For the second example, in the real case, $p = 1$ forces $y_i = \pm 1$, allowing to capture combinatorial problems such as Max-Cut [Goemans and Williamson, 1995], $\mathbb{Z}_2$-synchronization [Javanmard et al., 2015] and community detection in the stochastic block model [Abbe et al., 2016, Bandeira et al., 2016b]. The same SDP is central in a formulation of robust PCA [McCoy and Tropp, 2011] and is used to approximate the cut-norm of a matrix [Alon and Naor, 2006]. Theorem 2 states that for almost all $C$, $p = \lceil\sqrt{2n}\,\rceil$ is sufficient. In the complex case, $p = 1$ forces $|y_i| = 1$, allowing to capture problems where phases must be recovered; in particular, phase synchronization [Bandeira et al., 2016a, Singer, 2011] and phase retrieval via Phase-Cut [Waldspurger et al., 2015]. For almost all $C$, it is then sufficient to set $p = \lceil\sqrt{n + 1}\,\rceil$.
In the third example, $Y$ of size $n \times p$ is divided in $q$ slices of size $d \times p$, with $p \geq d$. Each slice has orthonormal rows. For $p = d$, the slices are orthogonal (or unitary) matrices, allowing to capture Orthogonal-Cut [Bandeira et al., 2016c] and the related problems of synchronization of rotations [Wang and Singer, 2013] and permutations. Synchronization of rotations is an important step in simultaneous localization and mapping, for example. Here, it is sufficient for almost all $C$ to let $p = \lceil\sqrt{d(d+1)q}\,\rceil$.
SDP's with constraints that are combinations of the above examples can also have the smoothness property; the right-hand sides $\mathbf{1}$ and $I_d$ can be replaced by any positive definite right-hand sides by a change of variables. Another simple rule to check is if the constraint matrices $A_1, \ldots, A_m \in \mathbb{S}^{n\times n}$ such that $\mathcal{A}(X)_i = \langle A_i, X \rangle$ satisfy $A_iA_j = 0$ for all $i \neq j$ (note that this is stronger than requiring $\langle A_i, A_j \rangle = 0$), see [Journée et al., 2010].
5 Conclusions
The Burer–Monteiro approach consists in replacing optimization of a linear function $\langle C, X \rangle$ over the convex set $\{X \succeq 0 : \mathcal{A}(X) = b\}$ with optimization of the quadratic function $\langle CY, Y \rangle$ over the non-convex set $\{Y \in \mathbb{R}^{n\times p} : \mathcal{A}(YY^\top) = b\}$. It was previously known that, if the convex set is compact and $p$ satisfies $p(p+1)/2 \geq m$ where $m$ is the number of constraints, then these two problems have the same global optimum. It was also known from [Burer and Monteiro, 2005] that spurious local optima $Y$, if they exist, must map to special faces of the compact convex set, but without statement as to the prevalence of such faces or the risk they pose for local optimization methods. In this paper we showed that, if the set of $X$'s is compact and the set of $Y$'s is a smooth manifold, and if $p(p+1)/2 > m$, then for almost all $C$, the non-convexity of the problem in $Y$ is benign, in that all $Y$'s which satisfy second-order necessary optimality conditions are in fact globally optimal.
We further reference the Riemannian trust-region method [Absil et al., 2007] to solve the problem in $Y$, as it was recently guaranteed to converge from any starting point to a point which satisfies second-order optimality conditions, with global convergence rates [Boumal et al., 2016]. In addition, for $p = n + 1$, we guarantee that approximate satisfaction of second-order conditions implies approximate global optimality. We note that the $1/\varepsilon^3$ convergence rate in our results may be pessimistic. Indeed, the numerical experiments clearly show that high accuracy solutions can be computed fast using optimization on manifolds, at least for certain applications.
Addressing a broader class of SDP's, such as those with inequality constraints or equality constraints that may violate our smoothness assumptions, could perhaps be handled by penalizing those constraints in the objective in an augmented Lagrangian fashion. We also note that, algorithmically, the Riemannian trust-region method we use applies just as well to nonlinear costs in the SDP. We believe that extending the theory presented here to broader classes of problems is a good direction for future work.
Acknowledgment
VV was partially supported by the Office of Naval Research. ASB was supported by NSF Grant DMS-1317308. Part of this work was done while ASB was with the Department of Mathematics at the Massachusetts Institute of Technology. We thank Wotao Yin and Michel Goemans for helpful discussions.
References
E. Abbe, A.S. Bandeira, and G. Hall. Exact recovery in the stochastic block model. Information Theory, IEEE Transactions on, 62(1):471–487, 2016.
P.-A. Absil, C. G. Baker, and K. A. Gallivan. Trust-region methods on Riemannian manifolds. Foundations of Computational Mathematics, 7(3):303–330, 2007. doi:10.1007/s10208-005-0179-9.
P.-A. Absil, R. Mahony, and R. Sepulchre. Optimization Algorithms on Matrix Manifolds. Princeton University Press, Princeton, NJ, 2008. ISBN 978-0-691-13298-3.
N. Alon and A. Naor. Approximating the cut-norm via Grothendieck's inequality. SIAM Journal on Computing, 35(4):787–803, 2006. doi:10.1137/S0097539704441629.
R. Andreani, C. E. Echagüe, and M. L. Schuverdt. Constant-rank condition and second-order constraint qualification. Journal of Optimization Theory and Applications, 146(2):255–266, 2010. doi:10.1007/s10957-010-9671-8.
A.S. Bandeira, N. Boumal, and A. Singer. Tightness of the maximum likelihood semidefinite relaxation for angular synchronization. Mathematical Programming, pages 1–23, 2016a. doi:10.1007/s10107-016-1059-6.
A.S. Bandeira, N. Boumal, and V. Voroninski. On the low-rank approach for semidefinite programs arising in synchronization and community detection. In Proceedings of The 29th Conference on Learning Theory, COLT 2016, New York, NY, June 23–26, 2016b.
A.S. Bandeira, C. Kennedy, and A. Singer. Approximating the little Grothendieck problem over the orthogonal and unitary groups. Mathematical Programming, pages 1–43, 2016c. doi:10.1007/s10107-016-0993-7.
A.I. Barvinok. Problems of distance geometry and convex properties of quadratic maps. Discrete & Computational Geometry, 13(1):189–202, 1995. doi:10.1007/BF02574037.
S. Bhojanapalli, A. Kyrillidis, and S. Sanghavi. Dropping convexity for faster semi-definite optimization. Conference on Learning Theory (COLT), 2016a.
S. Bhojanapalli, B. Neyshabur, and N. Srebro. Global optimality of local search for low rank matrix recovery. arXiv preprint arXiv:1605.07221, 2016b.
N. Boumal. A Riemannian low-rank method for optimization over semidefinite matrices with block-diagonal constraints. arXiv preprint arXiv:1506.00575, 2015.
N. Boumal, B. Mishra, P.-A. Absil, and R. Sepulchre. Manopt, a Matlab toolbox for optimization on manifolds. Journal of Machine Learning Research, 15:1455–1459, 2014. URL http://www.manopt.org.
N. Boumal, P.-A. Absil, and C. Cartis. Global rates of convergence for nonconvex optimization on manifolds. arXiv preprint arXiv:1605.08101, 2016.
S. Burer and R.D.C. Monteiro. A nonlinear programming algorithm for solving semidefinite programs via low-rank factorization. Mathematical Programming, 95(2):329–357, 2003. doi:10.1007/s10107-002-0352-8.
S. Burer and R.D.C. Monteiro. Local minima and convergence in low-rank semidefinite programming. Mathematical Programming, 103(3):427–444, 2005.
CVX. CVX: Matlab software for disciplined convex programming. http://cvxr.com/cvx, August 2012.
R. Ge, J.D. Lee, and T. Ma. Matrix completion has no spurious local minimum. arXiv preprint arXiv:1605.07272, 2016.
M.X. Goemans and D.P. Williamson. Improved approximation algorithms for maximum cut and satisfiability problems using semidefinite programming. Journal of the ACM (JACM), 42(6):1115–1145, 1995. doi:10.1145/227683.227684.
C. Helmberg, F. Rendl, R.J. Vanderbei, and H. Wolkowicz. An interior-point method for semidefinite programming. SIAM Journal on Optimization, 6(2):342–361, 1996. doi:10.1137/0806020.
A. Javanmard, A. Montanari, and F. Ricci-Tersenghi. Phase transitions in semidefinite relaxations. arXiv preprint arXiv:1511.08769, 2015.
M. Journée, F. Bach, P.-A. Absil, and R. Sepulchre. Low-rank optimization on the cone of positive semidefinite matrices. SIAM Journal on Optimization, 20(5):2327–2351, 2010. doi:10.1137/080731359.
M. Laurent and S. Poljak. On the facial structure of the set of correlation matrices. SIAM Journal on Matrix Analysis and Applications, 17(3):530–547, 1996. doi:10.1137/0617031.
M. McCoy and J.A. Tropp. Two proposals for robust PCA using semidefinite programming. Electronic Journal of Statistics, 5:1123–1160, 2011. doi:10.1214/11-EJS636.
Y. Nesterov. Introductory lectures on convex optimization: A basic course, volume 87 of Applied optimization. Springer, 2004. ISBN 978-1-4020-7553-7.
G. Pataki. On the rank of extreme matrices in semidefinite programs and the multiplicity of optimal eigenvalues. Mathematics of Operations Research, 23(2):339–358, 1998. doi:10.1287/moor.23.2.339.
R.T. Rockafellar. Convex analysis. Princeton University Press, Princeton, NJ, 1970.
A.P. Ruszczyński. Nonlinear optimization. Princeton University Press, Princeton, NJ, 2006.
S. Shah, A. Kumar, D. Jacobs, C. Studer, and T. Goldstein. Biconvex relaxation for semidefinite programming in computer vision. arXiv preprint arXiv:1605.09527, 2016.
A. Singer. Angular synchronization by eigenvectors and semidefinite programming. Applied and Computational Harmonic Analysis, 30(1):20–36, 2011. doi:10.1016/j.acha.2010.02.001.
K.C. Toh, M.J. Todd, and R.H. Tütüncü. SDPT3 – a MATLAB software package for semidefinite programming. Optimization Methods and Software, 11(1–4):545–581, 1999. doi:10.1080/10556789908805762.
S.A. Vavasis. Nonlinear optimization: complexity issues. Oxford University Press, Inc., 1991.
I. Waldspurger, A. d'Aspremont, and S. Mallat. Phase recovery, MaxCut and complex semidefinite programming. Mathematical Programming, 149(1–2):47–81, 2015. doi:10.1007/s10107-013-0738-9.
L. Wang and A. Singer. Exact and stable recovery of rotations for robust synchronization. Information and Inference, 2(2):145–193, 2013. doi:10.1093/imaiai/iat005.
Z. Wen and W. Yin. A feasible method for optimization with orthogonality constraints. Mathematical Programming, 142(1–2):397–434, 2013. doi:10.1007/s10107-012-0584-1.
W.H. Yang, L.-H. Zhang, and R. Song. Optimality conditions for the nonlinear programming problems on Riemannian manifolds. Pacific Journal of Optimization, 10(2):415–434, 2014.